[Binary artifact: POSIX tar (ustar) archive of `var/home/core/zuul-output/`, owner `core:core`, containing `logs/kubelet.log.gz` (a gzip-compressed kubelet log). The compressed binary payload is not representable as text and has been omitted.]
[Dp}ftv1313#Q3MXaPCj"}-+G|3hG7"4QI=̷1.ҚNaW3sKwO"ʬYq8x3d8pyx8.'lD]Y_5e<ڒڲe{ב%L.#Cs7)يMP\݅ykrYڊ-*m|6jbcg2ײ^{46R-bfYh4=kv-֙,-4?հls.`9VͧOܽr؝mI WZrV9Afx1>ѵ0uZY^UŬ4Q[vԌ?FWfGbRɤR`hTr TQس+X0J`D$-h)SjH껬7_Dk)! FIb*Zm%Qa ƞP@]"G2;p/`ON.4zk|[U"pgxKS_?xl=EI#V :B\F:DGDн77 x#`wn3[כ.Z ~s6 Og}[72Wǻ_ IzfiFc~洨RbK*YW4^īi}^ Sm H<- qǢȸ')55$C,Gy+ .Z_i7!”F{:#hkg,ϹLNa$WS; k)F+/w̖OnIXq_ɝUOzsgM1UcZـh,KBY+o)} XVk`5IX5͗~A =g캗_}CU\˞qzNv4ŷUwYtwd=}czo4 X耬IZEa!#%4S),Sb8őo! bs!,FʩR ˽)-p!ڪߋ̏Spӫ:˺!Gk4#=F-&x#32x5sf"`ЊEtPvlU0j%m-1=9Tgfn>OAeFl^ݖl7WhٸF(&LLIGNߏK[pMX2* n`0Kb^󲘗h^HzZK5FHDQMH`xIZ`!RRռ-"f`~ R:Lo~ˠZ@ZgW .fVF\1kTakw4 dO  },87|Nӊ$?`:v e?4ҪʚP L,LJc$Nj/{7a`L_[btбu,b'CYu}r4[B/Ѵ )SߵkA_SG?^^W/19@Mjm>{oiQLo}?5iwCh@{01xQJwY dqB-U[Hb-r-A_ՙ.u}jt1)]lRRhM.g3@rg ~"`La>oB3{q~!YC?5n5V~: z^8/Il Nj*ų`U|㺏-)͏|DH 1U.v,>DߋxvWTA=t'V2=$ +L.WT`~y9_&wI~yj9| >3@ 0]NQL<3)5NF#?U]:5ƥ-f~>KY˗\enCH `9R/ Hxd!R&R/5eDDL ` Xy$RD%KlUߍ-g4,-҅F== om#R;{_MynPw|Y n n.7Grxn/|涝0 0w߃rr[ -h>D|ij'\]g]ڂ3Ġx9*x%˛Ejj'ҮARsC=%{AfҶ9.A50/%r[Geۅۦ-sEo/W_`Fg'H?sF}_qQNZwƳ>7M4>7ۭ3GReLiUxw:6xS,l!&j|EKu|aTZSi9b}Z'^YWFѥm?xm?yHYvgu׆TP.F+*S9Z`s;b1  B;wUu~Rq$s>ױ)xjtr#\ΙQ;v4Q^tbߔϦjIUG'$:s/7W O񦸴ިxuMⲻe2Zw;~t婩@5̦v}<ʸ:QU QN[s]ԯ Vz|>yph-#sd. {g5mv!HT YΎf+N:Hz=$.VCXւ"І8V )w3%hw39Br?$A]7W]țBs8 X 2yQէ-A韃?U  pO.+6E}.VJHJgXPW@|3\5pU1p/qtۚLGN̺{NMHk4DD(0GЋ@J-RgRml{hT畕"S@)!7$^D .$t7(w+ D%IAX{ȡΓJl'J497]v& ~d\|tىt;+6[ 恚M4W_ry5'IL"%c̕YRsmB;sUqZF$ V+JٳrRϏ!Jh>* |oJ&(Ase%g$]k ?p+50VĖBr9_]‚ms[|8v%lFj͖zng p浭BXދl@e? 
щ:qWO~Rw^e%gթ]_m\!x8Ov A^@[}/x>2My1 DL NKcBLfz&mb&IiOMT=a~X0!b{z& єXH-($DH,n*StXs HB|Ve<ͨyG9yCw:-(R+%ŕFR]hJ5vA{ m0zV<[, O@BT̀IJI!rhilAܘFjٵ kiE+Y~`Phdl!ÁEЮJ 6l2eEhV{앭pigH(ZRT(!x`{ZO"F]XhAq"11.ue( CDzsYNӰ:!\9Kj٨5]N_!}jܽrq iRrȔ%ڔ:$*b͆"9P*Ԣ:᭶&H]ҹV*̢P%Ol^Mu ^igr3Ӻg{+_pf)p=~E>0Jza!toaz_a[O=d =@ا}FXQW]iE]el5*ciDbD.T.Y0 肟} +h,Y/eͯ,ՀD9N#JJь,ELitU2ׯW^ ᗙ.AMA|B-j^J/ n"Δkd(qg3}z7$笋Co-RALnsP||\A`1DhG砨h{0I%H`|D;T#SlqqGՑujFաNuRܚ~=_?Po'G22p>G~ Rwv^Ӗ9mD`8poF~ℬCb5u``-,21 mauż)Gsɡ*N'꺹j$WPGoH,}~^UɯףX SFvh19ciP y˃Jz.ϑBut7~|wz>=Cq~B ^P3 ; чA/䎠4|кYO\cIl˭Ax̏8_wy"Lh_nVJHJgX_|3ATeW㯋 W֯tѿ?mMĪN݃{$I4DFsLDH ݝr#Ey| )Y3n׶ =zaJ|}JFx*"蹍<P4(:xA1dz @[cT4='aN#:8ܑ}h@Cc̓-󶝃v91ڀ'=qۮj|j>4am6h~M?ڦmӏG&㒱=j6lo#}je_5X ^5~WWGm6h~M?ڦmӏGm6h~=Ki)s0O.'vh]D?k+\ŅaiѨgvH/XݘiyJӤEV/7xX$%ER*-V232+I!E.eǕɉ"XãB1*$`OW\%FZAJRQkTf=#@DޤM+MYFa z] z5,ɄFgn'&AŘ&&&%h U4ޡH*Eh(+Xi)x(%VA\ |Rz+Jd|+i:2V2Q+4v-XTXcN3e h#b iM`TYgD/ H<%m# m"J6Q$L G~k!ziژ\TSFggKcGDc8Nk ް;p,Y`3uh8j:k9v nc7%{ 6 %!R-Vt ׄ0:z.|@JJMP \{<@H)Q P)1ZjVZZM\Nyn8n{.vrm#\:;#7 !Vg+އ]u~n~k2!3WAVo~vG Zeݶ>g%k֧(ˤ'"}\Y!3?/7rEPzNgʆԇɱg>l![U02ѸyBIW~{:+cǹrt+1QDP2Vk۸# RTdg< %s i B%BPƹIBgP Ix9FksN_"V>Z@*&ׂʲE%Q'eTlZÎoRYo16z|ɤŦQְ֑ [;a !V'YP+,"·Y5Zj EѦiqE#0ڀ3q] 26mzb1J&ijb (w[ EX)QMyk1ٰ58[ԌykȠ폆UЕ~pݾBn[ل񠜾ճ.>b™H_=_y_תjlO솩'Rx7jN7w|g[r*n[?u. S6I GWϣq lhuOG8`Ӯ;G3? 9+CN^\]eyQY_ k~_٣󗐠]:thcucJd4]{ǀ9= bTmBp`2Qf$S~o9On&XJQg0oV4P\Rf IU }2R5HUK %o[ AS aW\O]eh=vveWo]TtWϦ+;gyA?׏?u^:Vz`V@@&B .^hG J%ɻ\Bw2:5O&!h#BdDTiԜQ+!L0ߓ㒠L!+狿cnkBIdt`Jp| JBZD .Å \L=\.QUo V?a>~|W}smؙ`Qygg%n/LY2$sfpF59T #vvCήrf2~ur Ybl$Z@J8j=!HtBy*\rnsI$eƠدgg&DOWqכ LNL!цT O0b~~zÚݯњ͉~媌_uɺUTzjF*涌AdqNhg0NlW&UsxR8xn7׈AvtO|??xϿ}Ǐ.)ӗ_~DzmHp?O`0G>L/YqaXdmcIФL]$Q9̛ #U#0QݢP JI[R0e۬%"GE=8"jm٠eAZpY6:`4P-4*XӶle!2cJPvDdzYeL]͜z>r4}YsYegQ*e+8s l(*x!8 JAvg=R[yxJ| %miE5b=gDčpt =^R]}HҀ Dʻ\|v2db1Jq+`d҂|!jZZ>+)) OXf%fepgmTF1)ϒ`N0>.nٙٞHjq#I[.Sl窳[} cmGCVmk[s:9ve]jZ#-F5 jp"N5QM'sb7ݾt+Gk@kl.{/~ PgcL& ]6w &kW-]%A5CdRX8Q s*0%.! H򙪉.ET \FÔ:,ʌ[zTjV3gwT\v&ԈDr6U4DfQ,eJ k9-tfB; z7Ox]4Ss Oc'7?:=ɳ`ur[oGMٿm_ٕ{nߓl>s-ԒM~T Q}~"Cz3Nʖi-nu0wP){v䝇 1۟]fg^9e~.ALIth? 
[)7e9'sMI Iw8PE fJX;ֺ\=.mP4S|*5|I|ivSy+~tqvIcd!3!-Ũeg@ZSi@)@4zP`QKΞd8A#Z p'd>Z)nw(tU[aA 5kpTTA3A*M3WT }5zM\EX/^vzM?^Ҍ.a?ha8+I|4o8U6׹v/OOE_8g5w2ߔ£ͬyA/pX|3_I?=[B4>2rs WA*Wѐv}:.k?+߳K{+O@=T̽wI}kj}ۃ@W/J) ZڞtKvyO_a5*e ^b2 bk-YըNzJ&M)$uT K4ZǍ Jd R(1pE.\!S,Tt4R[fւK Rp%wz=b<;`.>T͜TzW|BvsF{g*l~yOmǖ]\97َG;1)L0g"'*T8L+5C?6`oJYJ\FoaxvF^}}7x'q7Ʌ-\J+OG3=~R->%Gp8ˡ^BwzN;_ML'S'4ikq'Q4E`Jwj˩U)tlC][Lxͅ`eZSj߭evMw&9)ShU) 11O2Nt}+z3CLKl!_M=K\U^5*ot8jΥٰJ i \9(ȓQ8"9=-%:11 1`tR49 Q1&e R#c5sG(,7ʱupIKYnȞdew~o#J^&C0(%+],poI2)Dl(ȀP?9sKJ^AF1FPlB;s({>de퇃\^JԮ6:.%j.jwFRi 0yE 0AO|2ҀnXz|KRu>c0JLlVH>` Ʉi-fJ>ֲ}yZiӮÎebg7qV:ҙJB)Zy*=? ܌ˏ~UN vET 5 ) f|L.#fʌ[HD5ejVU检!?nN-i/]v3Z'gsy%:B,ߔ sUܶW*Oy8,PjxW[ZZx_J/8 /O kc.TZ5IE_UϫthaPԸq8VZ86克(Υ9eb2mXR3[eقss&\Nvx!c6<^pIq%uaǪrs)[uG?^b̨/>?vO2)x(c81 " C\b8!o?ˋӾ#g0 ..S> gtz=re/0*:m*F5Uz /Dj!D3SCgZlh['SBF&?gKcmb Rz2x>߅$žG1H F,'&xńZZ B(ф2 vʴƶ[2//)Y%07 @4GP7)Î`fDe$8&0sDCFuNʟ5,*vtTUuG;I|.G4;Q_z0俴WK_=0;;\z`9{U6roلyPEVmB(*GXYe\b8V!nsKωp=,M?OYΝɯݕ/{]bR+JqW*|&YcI~)]ۦY Jv^{ Rk䈵_Jx ~)dTǷk~k5>qvfLWQ=3n}qM_/+40_] }9]d&./*?_J#_}+m-~~7%;:_RTv#W/?ۉZ~:5F%/Tϔ DKf>*%rfNǯtP|uQ9ė.Εp!JL`*d5ytJ&y6-d½N_b J"X]*Tɝ_\/jz7RwƷbT {" }hIepF*`N:~^= 5£iVTqx4TŚ^Sd.jOX̵Q*g0 V0mȳ").w󒧯pI)H*5"hBRcIR{T&}BXT%Q"F[cR;ĬW逰 ߜ@Rz\TknlV"-t2֤I^YM47!cDK/6#Zk|qs:dԘ 0moNBq88w $)>M<~4QZF``frS RYU]2Xb)p "JFc&=j@0!|6_h2Ǣ:TirW DNDN5,8JkV3-ߦ;-#WY(jcPk<9޼e0u.}Ky咟+nq_qՀU@y,![PH8+jMv7DI׽,=9xzN未wIv˦V؄i$*8ʝX%BF=cF@1gz4‚S?X]z Ufۇc\K)oC2/8/ס\4CV,˖ \>ͿW)j4$Sx=-QmlXoUұruՕ-Y:!!3ϗ7\g0cT6rOmѓ{zGz|_*礄uSan{Krm 8m :$ qp7|~M[)(DIl,WJ"%F&j9a\juͦ9cB)7j<h#"(ڐR.k%'RHDc##g$t$O)o| )Ў[8²cB#AL4VwUF2cI62LT25Y1TP.Iz~q'*_Z{BYuhԙ'rVg^'yVy>h>c JTv'!y &cD%<@rFϵ,wJÃ&~>J7ܰ *<T$Cx|#tO.fa]*v+m!Ց VXArj~7o 3 2;WHCsKFDoAsBrq+Ga1: I1VLEM@^4*bLinfcO ]rXH^PB4DR%nŜ(a$iͦ-ŏ?@?yxAgD96 +2*cz~~MG؝{t9?뇩JVo:ihK^x~kE99^3G`q8`]ɗ+):UL HB9U tD˽)װb9Y4\ K$mÇvPksAɒZ婵l=!*kNjcko4|1ًKg4݇X7c gN$S2`)kLHڅh2#@'DLdʍ~>Á۽syeh9nwЇ/[n[mfϪ=߯ZEwo-Ye77JS\}{&@;68E5˧{lsh]ӶKRLAKrο+2_AffYWn핁UAqaPxfLe :U9O'10g̹ӅEwvC〦8ZwRg(N*z+Zf$x+wâ[hek.IƄZCh w"drT)&(vI(%!kOٷY*}T6uV]/ٽEh#R[^^םA%whB C.Κ0_ ; 
$p]ؐжyj"=}W\[j[ '=k l+Ŵ|LK {X+>F\I:{[{=P`S=2W(07*U_UVv\e)5+[B W}C盿9\!cpC37F?n&bhDCk7DfYAl@ м>4gm6ofjG?ގrX0?Zf<[WOø\kkMCmF <܅/,k-2)| >||)B.D OL3Mb"׻cNE5}1zc# aN^([B{lOk;Y(GrkE4 8Ay 箣U"g%"gqUo",6](%Z)y'SD^ nP\CX_U󮛫,\}@sF)'skss*Kx5W\ B}wf<+_n?_u5_ HM4(o\$юpJFO7+y|"@`@D.P)Ic68%PqfBNI_< DVO$EA/a|>lM~CӯvfSz.Q*-7>^C2_y Lr =N\) LqjA9++| 'M!/__} e$ ɏyܹǻ1{QG="\ ^H(A2{8NEIQ jx\t1wFeD&R"@7TzfC,&gS8m E9h!&Dq'>)fp'R#)xN9 $Nkhj$jE~@X4g(XN]mтKt<띝I\]4zqyY"Hz chwɼ![rXtWi&]|H EY)QS]'ѵs`in$LOsιxc]9Su{NzO]ڳ;E[56氊&[gEᒽsz]σKPZyen};ηtV˯=G+A\Cjs>\ry}sob pճȭ¹.[Y85H?dQg 6>_kE] WozMjG#QTG#E&5.'ZZFb<0⫕)+yaIVe=]x7Q 9N,zJ=~G%xnNOW)JeWbh[}U0R껪Rz_-(A +54rgN*7F+BSK{ww.Њ2^yyfimR+Q= xT/F!NRs7 qtkc\p92c3&zTy\M AM [JS*Eb23AQ\荒!)'6B;@(M BPƹIB81 L$F<9/SRJ /&5G͚Pz7d;^O%ҋ}/^yq|Ty%sD<2v006vO#q_r%,x#ΩK\,2P𧓞80rtHrwVWGf0Xp$rcXde1s1RWUVd]o7|loLۻts:GR}*ŕ֢7"FXJkV6`vXDZ,TkϼCxO,`ؗ15sL6Û㇥dlg}kq +xa-Q 3hăzt id8&9+ePuvCS;eҊB`'}}3im{XԴq8x -XxɘX+u2ND X.*ŤpeSpݖv6seqKL 8Й23tefҦ۪%s7"{2_)^Qbٺk}_ԫ8k|{$(H9w X#cCj@y?"=}Mz WҲ#ur)\І\&Θ 6BH࡙MD3ݠZg$Y(ךY +*zkKq'eYnҪJҪ =صH~[7 iXfzYM1VXY>5A;ɂ$a:thLO;J|2FX%>>wv]r,z<C31)F5 I"R'\T x4啊hoSL[)$ 4D:υF(18(8Gj"0ɥ  P]Ozd_vWeIA׊3X޲5n/ʶNU1?CJG-e}y:ߧSZU :! $A-@ =arQ.uRMs &v6Cuy]z$>Ӵ7>@}t'WᾸ1JT*-tzz29TԊqFI_MkA"WoaL `Tz7bƥxr1+Wݯ7Ս׷Y`X{|bsqY-};$*z޻Qtoڹ{jjI5򖮛!p`և^OЃ*u'=|hsx9[%hulj]RM[&~̑0o"|~޿U*gfJՕuzun.a8 /g?%o{s瘨󳿞|V?AKavM"n%[鿟ۼ eޤi&M6zY飕[7]r D=sg&$IN&XĝOa_Wrq**(Or׾R!|J1éiT|¿ө;~}M*>HDa#f $z#< bB-+!hBe;eŽ WV!y&IO%07 H'-G)>Iv3˝$h-&1Y#7uZ9(ijN)mL{Vjsvy#y]=Lw^:2-U9~iOK1;y:< !cIا@KŽwGOU?_Iȫp~.\[ L޹`\}e9T`~xv[i'1IIgqE%@`JV68Ъ<9qkWցHG?cbBҒ&T-q8ei4qýH@nk:-ח_UKwQW~::0yCMyv(޿-z.ͨ)qkl*jJ$GxDM[WxpLm 2Aq$ V1X L9 ӋTK)e<: ARMt,1,B`rgTHVhiPsu6CQ{0rHRIpxaNLc1>"x"mYoNA7nfXsMƇIF J<:H#LQ&!j%W9@d K1xr833SụHI=)\Fkid]TV##pL+o UhkoYW RYU4.-ȔF"\B6xјI&rI)=4cQ%6TizW DNDN5,8Jkjgc4޲{kE*rgP޽#펰s:O/\**AQLT6J!^ڠLiL )&^*K^JJDRi9+K"ڋDa@1 zb^hYjQ!}'T56uΆIw|\"l6r@:nX"4:5tͲ4. 
qoڸKXlM i7+s (Zˈ ]L#Ü4hgv3Nc;~,#G¤qT0`x%y%iWS=0oCB8P^p( Tȑ0b`cD2"!R 5HԔ1тFI#Jeؚ8ָfr@qa^}x.y|inGuKnǫιY)2qra H+4%+,P>b""ơ iڵh6ܘ|VSyM{f7{C=A=o ).6ޒR[8z9RZJ3y@7 <2:1Rz/Q;BX88-EFi,'Km|9&r6ȹM[:2,"1 99Z)"V_"D4݌ЇcE8|/֞4:w}w=/ia]##C=ALSFEQb ̹asZ`鸣*[T[#hDdՄWDjM"ڃPf~LG eZD^ˈiD Ƽ ihnM j=F!e)mgݛp2!Ÿ?|ߠ钾p$٪d5UWͫߌ:bm,Jц}w$muթ+5<1ோ$mN%?.ҫ T:-\]ݛUPxn^}TC j)z<ȷ&wWo>&cf~tMPoμМY˫$=yIw"g|x݉/Xk*Nqrˮ=)}ۼeM?7)/݅oHye=.dK"ø9pWF:;l#Ӄ30rۄA;)XXӘ h)0uT oIG]aFI4{BЪJ9ō8`P-iU"bP- ƽuH ÌМ-魉+B@g3Ч特0> 3WUl:'0UʻSǫD#L0*ӝ%AwpaUC"/^6LJJ-Ѣǒ a}s1w<3?O}4-׷F3F4aҴ){;8vb֞@= 1/ Ƙchpm1T{A;2@JF/tJÃ& еp|귋Zx 2Z :oQC?w1հfH'H% zgj,Z n[R!rFAg U`hi)1H-h3%_ǚcu.AbV6D 'hTŘ =&vn3b"{&CDZA1%.DpCLDl,,xGI4 vflkf8XBsI{w k2Tk^2y!o>Q}3=G5G`X2`(Xi3 &,5QEQ0ѽ7"OV&}'&ߦZ=Y>g PCs|=7RQ=t_ dl(RdKJYUU Y5 Uf[5b -($8% 7^OiSV gB\r>_DθCO@ É`]o'CΞk2m7yXtL'L* 1)NSNUFY'3$5AZX,`RD|0ay# T)-VJ\'%).)K( $4[أsYԋ,,.k-zB87J8P{:Og#VK6SR@4E¬3IA3X)^ k=nU3 %ZkJH1\0>w@A ƜJmؚ85j]x.$-BuNu]moG+>Z )yT5%`K$QxD lَQ $D,>ED^y="I&)!RQ&5AK\p^CQRţ\[E%-NUkZi״ְ(E|'2/o'ɎHfbF$|W_6AM ď%L2!H[H_iMh<(J (Y?v1̴˥^)7NF#WGhry֭Nj#Tgί`e0,!'r7b0ǃ@$SB9d0 ,Z#ՁHObdO rn1eͩo> ϑҜ7vu MOŸg\vQaְכ^o:7˓ TA7,!W;Tn\V-l]DXɾ[eͿ{|F3{xIJ5EΏXRR<8zj-?ޘZWm'oۄzіZ6vwn[N7o4E{=qH_-W{6n - Ψ9O}RqcM f:ܨ{f"5wjZV~CЪ3lvSަ2[,oxr;nXwS*mYg_ԭ7aQᔷ㤀o;ˎ R8(Ae6܉˥cRtl >%|y|t&LqifҦcqȓ1O,{uujP{Ʃ~(8pȃ"3+d Y&F:|ngg;m¡דfӴ|rw#Ot<[>` Svj 욟q:?8NWWAٿLq{K"ekqdz W/#ˤ\۱:Wp%zw)\+;Rv*pep\wW.  \ \eq<juB)!=\}pŤx(p[}k,ގ//|un`@gۯ i4e]} Ru[ifP·EI3;nev @z'n0ւah/Ǔ|g_E.q ǧ/cfV/R0[X;gc!ۼ> onoMΎj\6|*ʹ?>q'AD*ιYR<P#3vz(@=_MSaYZn&z6 4$R'W(0}*p҂J)hW#\<eıyQiqao?:^<0]j >$L@! 
`y,M1@r1&0g@F,e!!`  LnV"fk5bnׅ~~$;-^~b-qC)P`4OfCqa-`[J+v}C)%~> DO htxrZ&5Vh+TA .UQ,5rp=aҩ…t9nݵtdUpzYssS wx,*nIJ0F$q*.)MaAG!€5`L$,P9 *Õq`DP)P8ac`bPrt;zo&|%)Vms\v,n>^^ ;X[UØzX,d,iH<1 *TAzn"7mS,A-+ZEuHL(Em6]0Nj"0  PXWcԯ)H)є8rV@g qRz+Jj|+ѨG·*-l$%1Tl}2]X-XT')U1tK@4*Y &Έ -)֫zaTssc]ұMt$MD)׆ *L$8 2p[ Kcd \Fg^:cq2?Ai];}cX]VV>atU ܁hr.'e O/TB;Ȯ+!.穀7++"@պ rQIET壞w3:>z/o_ŝMZG Y-5rX*[]a[uaޑAʻ&xG`Ru9: U1q.lE w>lh`K po)NxEZsE7J2KyQ$2΅HPLIl#O92%tUblJ"BWK1ѯ8h¡gUN.E?_.72|H(][^,Qz^^jp_ߡi1$+j婄}tO/{zUz)"IiNoYɓ,Ga,AN2+J/U6F)D;MM.x뜡(+EB0*Q!;F8e1qv^`3=Oo&6޺9n7ugY{wpWl[S'}fl[nvݻ>ݚ?6vrhmP'vnl&Bν~ҳ(Sxg,uϭEpg[:]lnhb [VjݽMUwdݡ絖C}noywwYq-COtD>8𗇵ސe4lk]R~h1\_Cfls.>|29|Y\v2U³>K)c>{)8@>3yq(0n &=D ]E}|Ew |2/m3v_<0,JS-*6iL]M%?> .QalY7`"{&CD{N ^JJ8%Zb%0lBodUA;',_i &&Ղw'h(ݭ:ePˆ/F3{s%5ANYzx`Zu,_wL "{uv+{ԮaM"W{";]ظU t'+[Ͳ]؄h/Qt cf ;&r[w޺L;BΧJ̪I-40UANy7S)RL<=Y=t u" tңMWŹæ˔ (_;SގN:X r B8|Pj (53TX"d QEFb镙G&!I ԕlzRna۪G=s_ORÙH!y?jHG5Rv~~1)~hޏC7; !z'=IQN)93Z'd~͹Mַ?ۆK>Vomr|cpiuDzP0vY4 Y@SJx@Uޱky0PO}`ur; Uv@#_x$}< l풃go +]4ܼO>VY}/݅؞Eo|-rG[=mt=z%W=ҚM.. &c(v `mI 9Zr;Afx UR"pQVaUU+r5#pT+!#A*6)'k?@9`sY04\`D$/h{( Z*QV 5XsLA02HRSj׆( u(`f0"N&NVUT );NUJ䵇9=F|?\MhDQ9U`@n3!d 4Pi3 "(Pt蕷ۗswRаݾ}q5{-YoJ,'S3LO$=܌/aјvQVrÜ9Q [cI"!tl,iTQmƑu@=uˆCw,Zm+ ṣژ!! 
$sL(&<[֑aYqGREbD huFh Bbnxu|/uh-`;<a RCbwIG&Sf \&bMbOާӣWG3{}Cx2M"cs_M@?^LvQn]nG%)WЋ2(Pzlp` =+5rj< J u]mɡu`: ^]mC%kTt<+:Ȟ)+ 00,ˠm$tF{r4f'ӚԴ vyU࿎kI߉ZK!k^J11yJ1'9SސfUzs2;I}h^Vg[ ݚZofH`hO m'NSg'eZ9L{L!L@L*œ!ap) V9dmpf/#_Qf+dZE,viއ0ERLݱ™}>Ϋ*a UiC(XKd֐ nP{Wߌ߿KD>m'bSCˆ-i{z_aH-VTصj].shX 6 \8M`TamvmLUlvDbV-H1gpG%o"Q6_^̋@j)C'{:k4vIs7ևO'gW}φ[cbM6U!U,mB9?&S^}1 $flcM]urInTL)_6gnx~ w/=Ew};LԻg߃80.]j"ȭIu+(߂]erˮET6].G]1Eu9~oVfvQ6{y>t"iTQǯ&D/dBYWTc*.=ۂ*w ۴3H3>("p_iMEڻj _ms/"HDa#f $z#< bB-(!hBe;eŶ_mXؼP?iaa<=D!q&e,w( fQzhɦ^L&6JKLw~F7کnZ5_Ici C+NG.TLs踥9n]ɳ->|qDq˯Xi]Odp쀨{v6a< ehDb73OZ:O.].Dy qY ayΠFxExB6Jvu͆{.䀎?DHa;*BhM&ZKkQJ' *Hu{hHtZ7z ͤL{_% yEmQX*7$,;Ox')hWW\!4qỉOgӿQqL9i[_" wًRIo`O>cIj[ Rd+m% yml V֜~z(ACn6UvנVKFC^^tm#Tmjm{yeq*9;Yy78![nu '1beP]=-Ys;-c1xdPPUW'{?fBلw:…/xnz2AD=W5 @U>K) [R~ҜgP=&YNY~TajByo*onuZHQ/i0sG_ /)mz*ވb)KܾS+jZ~UqN:\cT8[!5J}T;no)+IsBw}ӥD4 2t0K][~i2QIu4.MjM`;"#^:iS$+oc%\[XZU-kY7lYGhLgMUԘV<6#C4wd]͸bnbe N'yLq =A @1B]*CӽNc aS3+x9nw Tm^EX7W@8 w*vG]pt%\[tRč_ZC:j'f9*Edgs\]6M|َ|O1޶gs= +7ԦCsCe*gJG_G^SbޏZg!~T.E{**Ms!DcK;{nZD% T.Ւ\Ɨrvi%yu~=[S/& AtM'n`rkRT $ۛ[ $i cr0f:KU&j+7_NLUL{63co'6gƩwqnU(訲䴸1W\q0 Z\h8QT]M}"iٷQ"V yz17yLs͕Ex)V([cc*C x& VZZa@["qSЃ*7S MYfgI_Cdr^3Ϧig#R1(I"Y~_.#1v |r[_uC'G$ ߖ[csu^̸Sg^jUހ#Ϗcꢵ(r2:7G9"(W>||~ېc;)!-W'`dUhv'y qH,e:M J rH0,DDꥦ0L*q<8xqJivjKim>}ʫ'6NΧˊ.CvqΜE^\.]J;uH" cDH+4҈J(1XbR`&,vNcF1I]PDk29M1M鉲"wDQ,@eB% yYWxDoqۮa:NyI(54>o>~pwEoPm4ZM5J A(Qmjj=CɓήP EV[0OĸU*J93h8S@9ہ&:͆6Y4݃TNV StQ6a\ȡ0Cd6iږ9JN*=dӑZh$Q*'1z4QeIov} ՞VYxc]k󪠂S+9 -R9b>ip#[No/cQO$v$XZ F,FI#avD2vB2VfԲo%l zj~3he{4m!}엵VD+#IAE}PEZJXO}JZD"v!an9e^;'e{-(-bK:Ea"MYBMflwW [yUSM8A(z vg#jE8>>^ŷzsخq[əs46GGi<2ǣ?[o)ؘtBCoձ&;u(2̀ H“^JMɊ\D3X#med1;bSaK!):HQIcBJT@bLs9 _pVflG;*ۀ ']ݐh!Bg5 q & aPOL};}h/ʄȏ uS+Io<4&KHA:i'k`SASKȪ 4)AHtF{eM猦('\prN)H2IJl$K`tAacxٌPf3$Z["܌4fxY1Nom/0ә&&֠얩Vxos.Dn$mriHNw+?7`&w30]Hds| WY_軫x8VB];&tA)ebMnpi;R'iTd3E4{tZ(L)±/t$+LQ#AiIMt*|$xdZL(]&,U @!B=x$DNKc'9;D)9谟n|y3XMW>^GTk50un%zaK|1^=^U;X0>6ލfw1uAflYyiϛI׋e-ɬ-1@}_~^r9j7 L:xYJ2t)T`뀄Dիu 
/!c\շHm]'/a:/g2JS-Zt:/x\?&[[_6/TwW|߫/%Xiw`k[dM둬~d@ٟh=bAl6w?<{9nJYw/7p񉥗UDϗm9 } ?@#A=ZA/r_/ {|R(˿e2i@xCq$ 3*)ZJ0&"W#M)0 +^ʤұ馜06.IQ$)Y`B/"Hj*}Suc7~9?vYx{8n=)s3轺j t&a3e }!| űg"cAO Mw>Iiz_ŅG޾@/o9h=|]=Ώx]}n$kAR@#m|nœyC w|P$U‘ꏬ791J:pF֯F?;(ʢsjr'K|=RuأCu959^rZ.Yvf$=mxe'rf|}>'?%Rm@XU j%:Wk e1ж$KJ,b6i[^E3αL [WlEvWS(Jr;<^\!!|" K b^YwHmo,jgjQ,A`VYA>*PfPBQaNQ&T3%\*Z*V=9 |\)mfbP{) JflF6f.TutAuዒ#?kU&&q^yt~49 ؘNT1Y"Rd|N(dTe71ZHV#F3NeOJVUmj! $(:BvP@,X95v%Tv38= M/@&B[ڀF5GZ)هCTA wP"o!32^tٲ,"2pdI"Sa3réS٪T4b3W#QqЈ8>CE8%C4˒QI*`&FBHdJN Ll@K*hNTz&d(.FXXS[WhEqי_u`8Y;f\^4" zqЋ8>zYx%&$^:FIZZQ؋RX ĒOE6}!6ևf?}xְYYMQqC8!Y*<ڕ`Ϧv%4ԮZw +˅ڿMCm( 4޾}/UJ񜚔OAv 9h8c眐]`3-O?;U`t 7ܜ8[ mdU@;ݡ[~0;4(#h맨vcJ rr9߃dA0<`xփk獡A&^KһZfmDɤ7a;ڶqU^O}-wJZne |t^qV!(4wS$X~N> ]c{K헣?.q.bm|Iu/B{h> ܷLcn;jLflf< 7Z SwB%c]hN/Z>$lxk(EM o{\x9Tʘ$G@2h`a&Lk m<ʗ4R90Ï8׊7ν\[t긝O 2䬠õ;O7jK|'O0u:櫫Hq: X*ZmQ!S8HO:> ² k礌Br$(3VElAo1%.! . 4;aO-|F'y|оhHHHS}̝1Qiة6CWTVh:&o9@׌8MMe5xV j+ZGX~ GRRYAMsH K^U:y/tR(Vqbj?kL(SR(lq* ՜t=;` mޖ#A z[dǹtH D #B6efC.P6%W ((Jq=_!j㔱VR"ipL;&#SCbHdI$R뺕1OY~u)Rr;63:\c-*^k;ޥdy-/f?>MKX3kQF0ڢ-1t9MYΰԥLk䐒e! |.~LjchZThLQYZK딅hu<G`"9+eȞ!6U, 8e(fܫ<γh~_a݅Uh2!(堖%Vң2f~(SaשX'q2MYU(!4vm! ',Nem,m(IVV@6xI) >FQޏnЖpfSU) J1zʘ6m-*[ը:1凋<7Q Q^>}krٻr1YZU?.8/Yuxy3H/-ny~܌lH Ŋ#Y&)|1 r'\YZO&{2y`KQ7ס%R> ϵQةZCkmP,0;a+@覙-fѴ`0EF\Iy.ë%[גkK K<؈?$ux̠4S-Ǒ)]ܻriJ45Z˳Gk59Eة6v 83|ٵq=JkUA[ӼtoV1gO~h|~z1*0-\Oa/fc+خOy+~@i[K%[lk6#y{3 .ژF?Q,h8u!omӻ[]tնfV'k4,qu4^QgfyRnWpN:?=N?:z?.?|凷߽~qzp4bjEy/C[MO7 l[4- Ҵߡ] ]vl!r$onOB po,?tk"#U0+(6.YU4RTR1*\F[1Lz4 <6 . 
_]|'?Ǹ$-%Ief`sTyҞq)(edr ö'?mX<,/tGhkҞ̵\E\}2ri>kPJNi^eV+rB5'*>l_])W!NH]:uUĵTJk:vuUӯG]_=2_%=Z]O\'QWR#CWJWuW,ҜBA*>)K]i>vuUgmwjzh|*ޫBcVX:,^vKsEO_^7MgE#_waGK΋kpa6R]`L }Zo 3;K3s0oμKD)ޖ6k ʰWO.eқ_]qۋ5WWd̈_SŇkfnfpBhlɠ Pu*hH+ر"%Ȋ&!NH]?uUTU֘cWWEJkzJRJB%?".*Ҫ?A)WMzFJnY, GϏٖlVAsl`&ʎ .DBI͖)% >QAE5_/ɵ_|  4 NKWZGJ+ҹ9{ʘ}JX޸`E70hq8):1vI  a8"4 Y%&ڼ8j8t&OւZob |!@!׆7od<ٷ<`@\\R8Eք5!eMHYRք5!eMHY~Մ5F& ) )kBʚ& )kBʚ& )kBʚ& )kBʚ& )kBʚ& )kBʚ-!gB{/;r{>9=M%z߱3.2ɬ&QIHFs H zud;K(JYxQI[xW'6kRQO)'ΥGD-kY҂Ίd`m;lFBsuQ9m , S2Jp8=5$ ' %fj4P𬲳J(2脯*5J vtRל1gJ$pzCHDx@<B;2@%8H02i?8g6/p';Qo6|b! eػgF|5 jcU;Zi'&Fz@Zrߩ c$1KoI{De^'S >w$#e]ub=$jw$K.)Қ I*,K1ΈFYKZEYBЛ"C{"d-S†z/Ұ7re{BLM^o8ȍ^:|7}Dq{/ =Oh[^EʶIkyDClZ,N5\N8*[a[Ww 'Ρ^NΚR|5%|<|"i698}sxkLFΐ+j(8RCAΊc^5ME͜RZ+>(M"d\邰.EŴm30!+9>Ɛr4w|ؙ8ۏQC5 }L>&{D;K&x5>Ic.I4ⴞ!BAL-vdr4&qv#53q&+œt8fLPP*X%B Ia%p/wJdx y梱< /|I q^Bs?}F9pJNST"wL;g0^>&#˻BZύ[ V Vc3/V^ST?\~0:%Za:(-@LūR٣yǾ;dfv N4%]^!'"L8tE_7DZ~];)?ZR~/i8wz`0>:'v-Ͷ"C!-m8TE`IO&UhWSIZG*TSC9BaP͌p\J jEPD0I4%` w2YNVup7m.)#Vu53}?Nl (4HR6~J>;R)KHN/#QdeP&_?luf%Ҧ:cL%F%as+1pE.\):`P!F)ӈr y\j+gR g̅#c;gWbt}v>Qm|o8&.*||Dd+8Y5@RN\eEBѩx;Ys\0MdyگTS]bט|[́ OpTjZfτK E&j XE1x9z[Y'b5 = iYF&E`>q: =>gF =/ o\SL{D حsS }Q٭]d3jH;'8H/͛`>>ZJimA>&+;C@ 3AH D&OTpH+5癩 G"P`@Q0D6LzE>*@LHKBvREL*Q5SO>&u$v\N,^؟z(\Yq22e o9mMƵ긓( FS.5%K3e ^ T[NL 6$%+x`!E$S^F55KgBM#,E|#eѓ̓5&ɥi+*sy-ye?IQNvY[L|fò*q+(.p\,,%ekm#GEo6,CNvv;/;ؗ ^mmdcIYȄ Ɋ"i5(&(c2v>W#g>le VQǢǾQTֈ׈F)K5*䤘E2#UZyK  A܄SD $QJy.zG &hLdIsiPMXy-'8]&,?&_g5.W/z{M48a"(ځV=˙Ԁ$}MQzzXa5WP*VuyUuMAp}D?&я/&+R' AED à50L))=(<<3 q*& f49;+ +Bsy5vӊك˗Hhy R@o9lO륄d싨 k$?eJu + *'gD ]f)b9R#zH˾3wz6w<<%Hs`G{ߏ6}+(^fMrQ{ݼ"( 'J @.<ٓ\ȜS&&L(ʌ /P+S&TcP80dh<⍷.l:z]1Eꈹg)}k[+RBz;U[r["^5ث*-U*4}A-l͜{3 'b07hJbt-eWy͠ݛw򨓅I;|Nr GeSd^RKh%e:Ike\GxpYӊd;M”B25sj8ǐ%q !K[-hY9;^eXl_׽5P(Vyệ(UWTxڱ,w+^rd(#֢M]rF 8-q2JZ Ŗ$I]L6($HB:}ЄdPe~V0P1pZP1 l,`$ɑ $%dj({< E{iv7ǃº {Th(snQa9*x) XhĈL;CII= Ooơښv$Zf%8 QA'=ϬsyM믒IH0@%E/HuiTtV{an9 D&]%~Qⅶ8JqGTE<~*å\mϊ\W.y7[ =r&ʑ;(NiO[N rw[tZy78J~N;9t&_Gq ?U?-N,<C@S 'L+!@7 ?9H^mϖRD+ƒw:0_[ ˬȍ74o=B}*ƳB>O\Lqh 
ҵJm.۟-kлwH]ᓰS+rW'Vq0|jWk'KݫO0GqpD~m]u?}zv:0{Hf\oq s+ܮ9gV;J?#c]#P>>E0sɻYg_IhcOZD3Xff91ن#sT6|ɮQpji9MK)KbNBz<(jN`0{5{ة"缳xxĎ"uO~n>}~=8@I \?<!A݇^gM-w|ń0Ě1CƮmaXm q4epF{P]owMpu= Mb~1?NaEK_S%p[br3]n|Оwe'S9=;Us6Gw"557e4d9z{](a^<l4r$hgpΥ'ά-'ƥEgH)ڪ/lC`iW$-,ǔaMg$d1ˠ2Fd4 7'/vn<+Q*e+$36r l(&@FiK:' zQo!H/)#)iY%%mY{߹[lte/>=7ۯX&fA3m,XIJ FUtH0,&PUU sG61pO9/xK$1E HVzd;o|e@5rv(pO! |M!bI?Spǩ[Ms\ůZNNx .ں9!4х ;,Pނ{G.&&5|Q{FݤAl]#3d&=<:PaXv܅a\/V>r] ]bޒ=} 5rў0:RtYDTE+< $+,U7f2W܊̖ Bl|QVsQb|U7{jQqRHk,EL)D(3ȔI2Ch9ֵhrV% uHW8 8=dٔ+@\% UՎEv ~g}r|5r|w_!Nòæh vEoYei8 rQhQZ[fl1#6g:``ʜHM f/m2*:|;tC]f):klyXpXPKkR2D'R("c"΄F(ZEOue>+vYj b}d!4yfԮl_iO t26Mr7?ǟǓ㎫EvA<丧^=[Ǥn't;6HAfyy^9S0U,m`z,ա^OO)Orj]*X$.} mm08UN:b 2as5[j EȦADan5 'gr\4g/I Q8B(O3?uJ.%%37x6yoh")F3eOƼ+l W#gǰnBƳ6kǕñk^l 9<]ZG;wp?f:KM!vٶb/-]/vݻ\>p9g;$|NmsץfzBdߐI}ņZ7Ժ}Һ=MxU-u~lRˆZ6 nݽM;_+;r3?ȷO7޾.O::<|/l#'ÄDd&l7ר;j_E;ոƕ fņn+1Hs7GNV[L^&O~nPxKyҬ{ αH?{~)Rw;̤>@1}k(o -/zoN?`?*ՈČf#xf aeW pU5ဪjAT~+'%+Ƞs-1W/9^KR*=9=smx;1jm} هD?>▣N`5ESV)lrF} EǀP]cN5gKs]?;?Ֆl$HeǪ9ePY7lAKoo\}+EFR+i/+&M6oi!k?F>o*ĝ_%hxǦ+l|5J.Xd IoQy<4^kٌ9G H3Y@O+87Grڻ D$uld&~dUa4x*XH+>+^򁍜8/wołM:>0b]S3L(R3E7TlֆP7α!jʖE~0T-C`c aVb#Ag%~ΟNPP{v jW{ 㦐Q1ő2ڸXb8-xZO*=QB slw>Š#B3$A]`MBŃ8Aʵj ُ۠2k 4x*"dDWDO[w z&Jn-Z(fɎ&7R2C S> 5曧MKA$m%p $e5k8H3p!uNӒ⢛⊋i|ITDYp$KuJH1P(2*M.J/?ףwFw~{s[9ܱ٥q0ÙR&f1'l..x^ijC =&%4P5"+V%d* AD"v.DӔO'CB{@HɗmXDcF/oIJ#r5wDĠ*T19J)\ kIB"!OnYlQo`w4vB.xP#Qnj"յt=o_o+#Ǜzc9')Xa6\z*1Uۈxbqay.$|O/Mj`3 Cz)bG9/aE~~9?0r9^R5b{D1Y|jGec*]M&LZ[o+ς[H70+=2ұy{WнȴƭNnMzܤt '#a^[۱|w\|wzgG_vv7+-|fv5i [__}ԇGFE1 {.tI8v~ţug.FG3//|w:b= KF1iR)ź-Rީ63V^3Xy5o\^xXuuWpV4QYO_(5o7S~:eVO^~:tkX~J1%c ہ+U L_-?av[r6p:J%rcͮ?~w $zH ܻJ렳Xm?oyDAע߾+|}N?gs^\q{kWK؊ߏtʘ|l#ߏ/ /0{b1ǍWZrNkAprq7/P'uױ0O(^T#@Q]HJyOؖ;/ &`u uaHT hu@;~[!8m^MOe`<ʫ+3JMd0#.TQ\xͪ<`tOE@Qb0!]081(fU#k:֙@b0eoN1i[C9H߬)psp[G3`v*y"rtI HM%rg]D@v(0 @H%4$=Y.) 
ЃVk;U2j6QAVH>5_y$LyRaxr%rƷU07}kM^Y)aVPW(8(p!K<e1\Peuma(EІvt4[cFeX-DGjuRj:*pT hYu|"%0b" Uc?Ad`0Ϸ}eK*:^_}]xSI)H2\ AdiC ',@a\,Ej|`(JO`JH0KXSbW5ZY|rU({]j] eC1PiVT<# CO]@DI\F 4Ϋ\Ye]]CdRcT͢ԫB2*u-ngL(H6ɕ#y.,k07㏍KR K$=mv2  Ǯ6jrdϪ[|J9!%,YT@5Z-] TzE.`[+6rBgL:ŧ7swiv`\QDPd< 'QA#3Fh <|+`>PB*P (T@ PB*P (T@ PB*P (T@ PB*P (T@ PB*P (T@ PB*P w tP:jnQnl7 wHP J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@zJ LVt@'h:z%mz{!WZ6l06?]M(^1y3JAsvǙqPۇR_`x2f: @W(lr=,t_L}ANü'ˏȼ/%$鋾x QnAD\W&([ibM'pOg/kPkٗ3Ѯ?3Fb1Rxp'h"(IEOjd%6rF~\*Y&,DF ^^IP8ZZi80Q*LTa UD&0Q*LTa UD&0Q*LTa UD&0Q*LTa UD&0Q*LTa UD&0Q*LTa UW8SRaBZ }tQz*LqTaGZEhT?V-@K= TK󾗯>_AtgFf~= jt^L!,"@F9oD-Ŭ41Y)"![OBժR1]nW{'Gu)Fstt%d kd-a f"w3*)N"zbcX#G.BWY\pgյr"& K/ =}ZGe+̀lS}{C,̈́&Ǻ# A0s_O vw>􄶱^L.,漗F}Jˤo6pq{SLЎ3 R,3VTʅ*9He[B+U8G8 x<+}^Nv|cðB%=;x2Wb3Ξ;t<[Gd:A R(#鋾h@0UɲjIf#AG  LH0߹~/O׶hzu0<u;|:ގh}{@^"tx<'>ki9 U zLM"I"$S1]J`QV:ʴY#J}P<6uosoOLe0L/W@iEl d,Yg.e:i~9!>mrw'/foAq`na0*wW?mѾX8sogM\ϏY'[7kqU,,3p%y|s)C;givϩ_Ϻsw⻃tn/r+}z/v€R1^:Md~Mtun2O;!IH譇vwOp#wZJ巻>?[?sE%ĺW6z>y塇i1/ϕTk/Q 4]sCn:\֝xsnUšEO.Ej@ ɆH5$@:*ۄ^f_ryP/Ha1W6Yn ^}-*0'd`Ud jΝdrzV5J!XI^E&I1a++ /|KUAK %B@w8 \JMg;W'ix_ y6-۽-MJӴlL[X'[$D `VҚvQ_ݲQz1% ;RxD 3Jh-*3:})RWd4=]FV! 3QwpԔQ=S)BWTnnfRm5P*%lbDf/H(7TB(kΜbBc֜QO`@jl ng<:jQ,hf3tlZ,BǔɅ& 1+#Z%))mP%2Y8" 'NG&}趧vtK=N Xm;u c&oK|Sz9$EpU:@蓤50Ә%vE$- d.0aq7]~=M ^j-to^XΙzݥ׳{ ÑljQ^F Vi Nh?4H b5zV{gY"B撛IU%0J '<%\ByImk߯,TٽzJ).R{ Ó Bׁ>@PdlӅ6MuXTgD&&HT@+taDD(Du#V3˔T*;4%mcAyW R6T۶;FSO>&<vY9-`4X'a,&hȴ;%. /g UϽV;iMuP[](vS`I4bV bo){Y2 1\(a-R0zSVbҡxM=a|־TҴ+ӹzʌ-[֏m/yuObY Tg4s\{Y7IN,Kem]EX̨))hR*qH=9V/ 9R73/mHݽHo!8 ؋!5MjEʉU_DIDR#i#pfό*qac/B\{.|T.qvQ, ;[杛^Nâ,;~CFxbVdXQ39 R%*x%OS1)<)Eh6=0"#uCq\&6AW5ب3q#vpP6Giz7)83㚋nL:гvo% y '+fхu @:NF3w*+ulJSÆ',CE4qM$-$`"Ñ8>p>fY :itχS*FlL>eD0#{FKS6sԨlHnpdt<',,Ȍ-dƣ! 
Mh8cp$ n\D Z$yҀZ81M&l /N ۴٘ˋa^Ğ{^K@6CC qcׁ[5ڳ$A{BŶacұ/PLJ@a Vu6WU >`J"[ݙ$Wpٕ$mO"[PjȾ$T`ZťՉzJ9W hgmVBT2*ԜaJIAH1ڼ1ڼTwN|ԟrd4z씲OPŵ}|L:KI W^ 96pLJ[{"ߢU:ʨ-k6 xKq4ʌ) n|L˱AeVCHΈMd* 2 /r4g˘sBE韗Ԧ(=13%̏(.ﻻX!9h4tOO_+˞3=&B:gRd<0i<\d0+ŀޗ.dÒgYob i)G[vWziuG.Λw?}(ЍZjDўt~xj~"繈8 L M rwzNx$[]bO|x Fq_d2I5Y07uYKnp<@ZKjptEќk҈HhFu pc-P$'i 1e*UL%`I2w+pQ 7ɲ5-Ir=qh%M]LgDVkbԘƒwT2:0]__[Ka8Wl'ExVs>)*t98s5wzy5 >TJ[#Aao> ի_U3; )V !> ~qn8 %BrZ Yd~ UDW誠5tUPZU'nE']w UAiMOW/w `m:CW]++h;]zjn9 $<;]f .>nh~ i3hw+վC]`3tU ] 8|@Hap=BEpNh [Q∉37l~k,#++GI~ y+o(ifM3VeAf ᦧHr}~1aL8ɟ$yKcqe2N,clН'0L`q?2&DŽ⸴e.cdhQ5ZdY"f!_vAkZO5Jۓ $WKRFΒ`:Zж?WCWz`L\;t\;E,t#J.t5}+(x<"JS?Ÿr4߻~\Nn|Sj˽^7=nn+=fZӖ]+żILu@G*tR }Ffn(؂+.cT̼>3 HhR̤ 0mIʐyTҊSENdB/Q>s-5jv_uMY/x& ocFc)p; $Y`]$Y,l0૭YrԲg<@%YՒlѶ<QSdMXUt*8ΥS7QNrqFlEy>dF?˫GoGzv&9g87p֪In{Y f|`~9v.5^58Ѹs9 p~6G$Eu6UHg(GzrENΕd ?u/<#S͟-֒DqV%oXOHt6#Pf .nr)nqsyߖc 1̽(8oWQ"M}׼F .NW?8UQ}*M.5ub$k5};Zgo>Krgc}Cw0p~'̥4[fvH4O?^g%>>o98Ւn-]v5C;3.ǓF?Q̻d<] t}r89dRE'Zꫦoizk$7,py2m|68AVp˦=TJl+?'TG?_~{{O8qS85\$O M뒽 ^>\>Mд;?\nbןǷ&W#z`GݮxWV0 ' ?`) |S/H7>hN肦.pҽjo/`_lGhbGj7$xFDw_/ITC F8&"Mɩ@r@#Ȣ<~ )YCg܂׶7af|>&B#]2?<XݖZ]oë-ߙ׍:Q)! XJ$bn2w+1*= PȡNgWN߻>PlNȞf~ͯݧ-<ٝ=zc;h]WVNHtvc5B3({(E;}Yp46:uklG_`Q!#;̽\<̲>Fw1|4]:Y5QCd}d<.\LeQ'w$pfgHs{7@\lTֆjx l6%<|Y Rpa1. 6a؄]Q 3 g~}$h¡NY`%ߌSuWT2ݩ:K w %(nygv|^X#aPwTVZ0ww,Z{K,.~Op<&Q1CMR$.)M" íE-娞$,b: *^DtDs)P8HPg7G]X&=NR/ƆILHF&\E|3.=31-Ж{y-es}Ar<+%p|&Hb4J2r>ig2C*[49ڿ޷kA &tGuhU6M"ΈfH5$DLzW(e^&Hr e2`OﭧZq[7wںњO^F,nz=y^hƺת꺚̀ BG]E;7:BY|nK/pwڣ;t`d\(9ǩRTXSYV=oBLK6#!'rP8)KAiCuG e7 &kd4 uo5!!K:GQR%'D)qv8 XdLgov}}v%B$[xE,_Y*By6Sn9`s5PO0w;nI(=:gzJrrfCaJwFɕ0:z&|BI ,pn DL!NF9( )1Z:c3-&S{"$CGl2 wqsaCq&ΟFЗҟ{}<VC抭Sc'HaR7{ZwN` ǫW >[ M1Vq8 -ʞ9ɕIb[#+cC(17O9W=g'[o>`BͳhF1XžFӣiYhrb=l{JH 6U}\r=fӫQ&`sUVP%x嵦VDC=2@׽۞.*БVj':zG4|1ձ brDQ&)E/}y)qv/x>mȢK7ut"dl܉Jx^^5n^G-\{SZyY=I#2R%r筧! ǭ $DM­xPT3JR9eC'K@drtJܪHfMJkb֌J1]Xlf;z䆳htꔲP'Qp9dSsg(Z"g62. 
"(Cfq` %p)@=&N 2#@GNk1bt9lޔywz}neӛ%2K܉gC?$xds#,79 w: Fp.g!A(#I!47Df*d!W k*PN H^+I\ =OE>]p]B!\6/CeL_(;=[pfr ^'yDbYHp)ł$R E%ETh9Rg{Di" d=4BO< y\R!R\wD.R%5CrԬV>etQW#|c8UOWZR*b:( dE9%2^R䞂Ҥ~[~7,]jѰf!CB=0gY/Wwy.x0]Jΰm xO Ie ˡB'M#CLcR0yN$\jjgϞ 5?N?_OJCwm_.iGo;Ps:;HJ{-{mzU,m=@OACC^fGԣzܻr6i4k^ԯh߿YL'dS5]7Mhzm_;7Hɧ|x55gY{eNF\No.('qI}eD6Ϯjɩ,e%{U=8hKOϛ:TEOsbm=և{f>wۜI`:~YUZ7U@3l-UixYٻ=7Z뷫]<{=liY~o8֜]hrs_?S_7CL&HM.ȣMdBs$0NI-(g˥N YYb17xn4Β 1!V58!e"r w"drT)l[me@/@y\|v&-mqiicU7CG@QďI7 J ޵&Juyah+fÕpi&[-S:h) %dK+RHq. ^*KiHn*RіCO/p|g [{-\=OZ'%n}\zJ~R+z6pp.p»WYJ*zQ\,.WYZ!WYʭ=\}Cpsohky^M媶@>~ز(!򏸁'__~σdlmppVAYGn>2sGee[ieQb8Գ}0&_7co8ҺZqLQµ7 x2_ |诣+|_aGc(o%L9T`~7a͟a특J^+j1±_owI✻ !UϦNすSÕ8#2gC& *u2e'B&1"d2 \eql*KH*KUW \ &8'2  &:˲`uήJI5[+gWUYOXFv6/ȅFu~cϷڔA|\^0556|ލ'O?j%bDNVp\@۾y_瓓8BA0BԓH"9 *nPފ'_ 9wkP|*?xG .p^hȃZ:lZKˈ+t$\U@ jD*FR.49] (HlG]+AD.[YsTWI1ɢ↙D6KG &ưT akxB "*$,P ɜxNZe2 bEP RphbPs L߱eI4 ̑|~\-]CҦ6,d,iH<1 *Pϑp:1YTni\(E sd-Ht y$ ƃjǹ2D&L +Me_)Y)pXRI$V } XE@͇V M 561TldZ#BiNScN3uhU6M*8%몞U> XX&:P'N *L$8 2p[ KcdG2:ztE;yk):QBC~Vc ꃀg?3́AYC\/Z@2V Em+'AQq~,(17z-OVRxE6W2%%\AT6( I.XjBZ5 H M$]T_d0Q:u EԡWDO{)+Wr\y7.3k'nOXFzhe^tps#΀3rbDq9,-|4zoщI DW>Yy^{y^*qo hs9^ B[l%P!b @͝{e7_WOr;QWt(3WT%|8!X뉔,9DTtQEyAJԢ=0"1q*ue80(3Y`jDzMVta欯pzץ\/8&@Eca+7PϋeVi+fT:1q iR3aT:0 ʚ"9P*"VXV:WK3@bLx-.-UJ+Il2Y͵]_1ᆡ5tG aod;1!|2SHZߕ y 7򪀂)R-,"T'r™*12̗6ȇE@7$f>.8~mNirm:nk~xK@%(at\`RxFLqM'dZCx 5iحbS6q:nAn2 5ʻ8M|f: l7[B+\ Ɨ\c~4˱e|3܎5d{^b޾DƭUFnlyddӕ{㽾.ȔA|ڹ^ĆEng\Ng׶}bQ=@T/d_\2G_KMcq y .srD!&=|"T>G5}ZZG&w@ :ɊZaQ!|˞^v^"EHk[5s8+yr%7E6hwɜ\[wE饊(Ebh18o3e(RF%S69d'c0,&NK?%6bv7 +7:Dwz^fN,k;_ZC-:ܸ>n,ٺLW쁮Wp'Eݧ_Zk!6XVS*)|9&Wmղu:ݴ9P ~ͺCl+Wھ}QwroV!4mdE{/z'ĽىOzOtfjݴm~s!6 i'jǟ-ͭH7go+t!nymu\J>hh圴*xBҀ6tyVl5PV;)BSLE0.'ZZFb<Vҋ"+q tMgWݾ+.IzfnMS鏞]5U.xiXR4|R4_AqIS4|)OHӶs(+*xfw"mu4hNyVMô{4=%fihwR=l{C7`ZTBHZ̦ MPUEV'2J@fVDtҵKqj'ӪVUYϣ9 3Q [iOވR) tƫA|]*]i"uqf+mF_{,^av2LY^L">q:Hn 20{>gF{^ ޸l)d)c_o(de7>$;dnzpJ 4 dFM{L9.CnmqۜWvmX?#C+YDL8r+5~J=:;;(Jn`(0 ȆA)^1孏 Q&$s!;)r"&͔icT_d*ws87.?ՃC,$l9jik2N2,LglJf&d0G-gVyPن$o _19 $$ )!gf&ü-vVt[O8/]Rv~ ڏIg)I8 
$'Vrk֧rQFmYU8r/R(L@QfLY1/ *;D.3!Gk>.T~UoJ]Bb0J?#R"Eq5n{ѐ!ijQ A;b{w{^;DS IIh24)ȭw’G!%wQj#‹ BCfpt΀!h( YaN A`J5E2$lZ|jG%BasJ?ɒ';tpfJK%)aRЄ91>ǨyTLI%'T!"CAt_&%.Nb=#ϗ<>9FS9- 2bIHZ[FhHxXI׸ly¸:s8o{ mrEbB0 E@C1mLUGkЃִ`WeiZ4GC1bQFQ۞? ,&[Aw/y~o!{Q3J`%= RrZ8&u 2aI>goy.2B`yqѫd KYYob HZ=6hcaP9p77 Dt2KgϮ4!u(\^܎]flWB(\ݢA܍܅tBC³hhwAh~+!7 ]^" b:ؤ"B:ص4Z~4q̙47M#&៷5MӴhT3iƭ0eM8}ּi^qǩdYlBčO7/< -U(y=ahB*ufa}(NӼ sSh͇\E5:[:zu鸁eѷGjqy!qܑX]NFz(L7.t6^pt6F?<; 昗{ҕFv,-3AWIJ x?ݔw9cABL"cw=^x{t<&8-OCz2i]0ܾ\Sꯤ]>14Bk\Fg2No4؟jr8=X~yQ8-]EpO r+Υ9eb2m!Hs2l 9F=Z"!Wtqyޙ"mƾn. 4J=KiI*xGU޷igurRhR/o"I]jy)U@VQd QA'=ϬsyM9'mQrZԮ"=:)G6國 ޅ`".omB[28JqGTEʥmsZN5&..~ϛP!a  O͇G*{UVKa!s8X8zi<ϺΡ89)`ipˑ,cxr0E1c|`~AxvƶZ~(9X.0Kԁzksnuvr|T$(M.@WPk)o)4] ] "}rg͏*HK=("8~Y%' ڌ Qxbx9bGu?-]f4Cf>d^!`2xY@|ro4kd)YHFf26ɴ'ZEjkŖ#￑*g۩jӳYSkґBX|72< yP`c0.%~ΤzCNH&~YL6, >Ϗ9zGG;Οi.RM᠉Ob@g0p; |&i"- Bڴ߁6]Ʈ(Z4whJ.ֻ&yģbpA\` J;rns%pk1Sq>/2ᱞ`s/1|tϗ$)n hB%w)=pyQ>PȜ:m~ڰ¼V?tBG]ϕLo@*2|n  gN 0(/uZTUk2h+~M, z{04{)Z/^[^ B^f/|WJp)p8v}[zD-⏀+S_=E{Wa /W`nW\X nx-jyt8x֎Zf8L ѯ/?`6X " $C$ܒ#Zi.ibW*[0M\W ؍"Ƭ~탹D?}}܄aqY}@ 8X3qf]+#=b(muO#ÝyQWBr'C% `~:K-=W<>^JcO6j *+,VVK1d9:Q/\eJl奷%<(T${c*v7wT,bM¥MůTDi g[~ص \nAw pWZ=+bxpUŽ/)ju"._W_\OL^ZFl^ĒSuuLhTn**D*cT1)%Ňw; =C-(!akmH_#!xm'Y ,`SbD ,_̐$8"2匃2gXS몚*dǷm-UH7?oW3* ]fp4Xɢ~uk/*@&_e CO>1ӫ"V^gz9Mۺ0ގ`z |J.'ъ`)㱲گRߕ Mv|8n;/ֱWoӺ=[DIm_.#ff31ݩA1qQ:Xm9"}AC6|6zvRɭEOPLqR]pgR]n.1d擋]v>h)Ǝu ; Se-X,n '|T/@ [uʷ[NQ,A%SF?j5ה`\cTJmKϽ(ozˆ^FcD2Y+냉QFDD b &R Td36g`%~6uSWTg;1x^eu^zu?^| v TZʍQ!P;C Qq2!91K#^!r#=0! 2jh42**grPSR*Tr#}7,t&B|}-2]84Ax06z/%(H#CT3d5s:<3}=#׆<6BB$у#΁@J%WU32܋y O2׍'?[v?ck="T*0(/i@"qx0#Up<T:afбlZNH+eԄپu[粗rxR榇$BQNEێY(=?ƷjBcڠ{'ft>L&-5I):igp9hc} ^UuV^G Fxna]0֍Ϣ= :lkH'RwYTUHwaXJ OSϧUSh\n#Tܻ?bNFF\'Ȫ6 u"FZ)-)6wnlݦ-tNty]z! 
d./%&f|unkwA巃8^Yy|*aq)ëc"úwC.4G*1Soh&s~ jhgg' i'+!B?.]A$c#xɜ0knd]Ѷiu]kJN4rb=Eߢ˻1Px'P7[o *)j+v=:CÒh,4Bv&R4Һ!*ןRV!m$萢#(3hxRܿc?9=)2Uޔp雿5h?4G|Ch-QU\{3umRnyt!W[:mndTd8e6 bp寮 ńae-&H(mYGg7>b) QGԶ9!R F7*bj-ņIk85b-Ka⯦#0_Sa"{{rֻ}ܾOl<@%$+Z۷G:!̨n.+ )f ]'?G; ^RuI0'-m?G:1H%z\wp>e4i ~po>>7g~HMi:wZp KV9p FQ0j *X0B`D$4r|z0\?}'wmfpc1: ؟b*Zm%Qa ƞP@wy2g)R,y9oӦS7oo|\17ш$BsL !F:@a*R[,(zJ76נs7lfSr-u<_& N7XwBk>e`dfIzŒa]¬` ?x sZD)Z0la% ,ӽWWڬi^V]vjZg߫z !XXWsG1QSCB >DkRp$(PM<[֑aYqG kZ 7܆Ql6! Axs7fhZl~M/d0Et1\V;WGtҳ>~Xψ 1-l@PQne Ϟl)˦d}7 IBP+ MG#9Ak+zj1.NcRY C AHRVa S띱Vc&ye4zl5`jnDd6 cG!LE?¢; t<#: =aDlVtÑ)TeSy޲J[%M٭' 29r͛b]kIn,UQN n?R\ܬM( GD "aGab0il͔k/pJGʵ>o7Ld>Z7 ()&=)P6飶ěṲFV8uĜ  g32Uaa6 If,=b:Y-o|E}vM} 3O?G+Gl, %5c8:479ŽXL4B"!cټo# !9{D%$ؤB:`^e@.*ۈ\ٌv:+.桠v68&=jmZ0cA)81' ABFhUh.Z ,Cv4HZ# 0c}9p4&3ffiUc$> e? EY(̆wYvMz$&/P:ITJp07N _@v:%p1%] QݗKy{XVKZVI`X#5C#@ ݶ2pFDz=& \1S9,u+¥&L/\xveu2LGmgNlͥLf_:Rd(d ZaK-/7[,n58 ȝϝ@x7?ug;~ T.UY7 ށ/g9h|+gv[n@~zߎJ2oY7? (\ '~_a_MVF5$Q}wT\`(rZ^H}Pj`Wbқs6Qh`~N {1S`@H0b91 -&"| 1M(`Ll{hJ}8dD c)RnwDb#aG0I޵,#_ajzqm<2d:jfћLW/fbOeReWD$(Y|X)BT/q/D96b u꜔}9L.e&EܣOd;QޘkB/RoHvhZ+oEũk^ZڥPT&ܩԽ.5rZp+􏹩榪&̷Uw旔o>e.Gz$\18P>9+K^m{c;_[GO#s~O;x~7X!(Er/p}>?vZaM ?nQytYh;X(1S^B ͲJZ,?/"3ڕQjRT,*%B Ja( 5 7b'UBC.ʀ<"j0QS.@!JR^-MےM3栂S LpRTN`Չ5V\Uq.I ec$CdG$@h}!+^a>?N9wM<.QC ij  P jH $jIv"Q+YkE: LX*R2ЧMUA:QAL.md^;'eBBzYt+. 
tIAZ}pQFV X-pOr\ޭQ"GիMjwn:Rd\{tt9˼{t(,&O, [O gj؊Ұ'a uF$ZR (l˲B2c T"h5% TyPWBUv)jugϘuq턀|A܍wW/N|ݍV 7476#St}DJP8hgWP+ )ixRj]\TPA ;1 m}H ;ML&) mw0zvȊ/l@U ўUN^O⬟`__^ޔM!kov}lO z~E,_ɳD8XܝGzv9ɫR³~di8KYK*;d e(hh( 6`$9t c>o)7N_ 㿄ջ#{l"5I:xYJ2h؋r@BD6"|P/}|q<?ϣ2H&4!*_-&= - )+?F/yn7MCߦ T|1??í<@Wj,6A{҃LS?THzř;y70b8'QC=ڦ(&"rŢBGYc.qYwvB,)嘼 ֶ̆rHc_{F = %;{a;I38lY48|Cxa=Bov[!0yӉtUNy]g|̓΂4|sx9myYo~>rΩcCZdP>MIuXgI^/\oO.ns9HTGDscS=b#FƩ)xKȪ 4)AHtAvb+ٜ}rl$މHR!JRJd%I_' A\ͺg=6 03~+m09շ\lh\V]^wn}䋡vv'%0nr֗W7{ǯtymNn2~sžwf'zִv[zh:7kt2rrm;7}ef;>jIn}qy8yǨHZXyҚs^'Gֱ~q` iLB$"8,|9K *VBC 0 A4(Ć&"CEUണ*R\TN(2bE٠$]j *yE"+,EБ"PV*a *I:zdlZ0R.)M wBBxP.*ͺ?.RDQZ!yIvyws(/UU}l/t_%6}* t Jwe,֕źXWb]˥+ueRXW=b]Y+ue,֕ź{j+ufӮ,֕źXWb]Y:B)4te+){%eWb]Y+ue,֕źXW=Kþ[d=zw+lk/uiy(m}6`\chq$<ɬ%Ht)+uPQ95 ͟=__~ciJj;nOdz;u%<׿}8A$ͭ|ȣe6lO5d6 uLL=D2i/<#PQ`@r *员(L\=¬fd([T "B(\ _H{KNqU\ 1i3W{oIB>?p [^|MCW>}Z~2/Yg]q9$\'k -;y7iN.fO(fO"k r$z]AL0B28Ǹu@cK :e+s\u}JPϳۂ)(̾x:ZLD( Ub"%WxzQ8CltW0Z\ ,)A&H+PXJ٫(c*9v`p5r^z2~~T1L̓:d仴yFΆlhGYomjw3 zQ֢8Y0YJd!- xkHl)$ZFlI j,TTlzr6u#9R̝1K ֖YwilUf3㹶P5ƒL:7+2t$}%/`d|J#[lf'2Y"Qʄ*22ii%1@Ƕ]:U5h HVQm`렀2z8b]ӈVm4b͎Sڪ[`x2xqKp "J*}8cdcT;(^zv_ m` ?Cfd(訳e[SՙEmC*Dd:[ߚ+YwvùS_&ǵvr:>l~<"Qw-VT$X04d,3),dMpJ61l̸"MFВR' T;RFZ]I+[ Ķf\h՝L`\=K]($ՠ Ao.Z!^"P1R Lk5g:`g8C0ĥw+CWW&$r(׆/>JqQ`ʇOMSt p6}އq?>w_u=3c{SFi|2#xjJ ևT-F~%t:E׍_'X;<fƅ[8po6xSa8 SMD "&OE?b.|ryYdF&@ 15| K?V5U5QmQ4AԼ{%F&ii˛ZLCĐEWu(!13_{P746E6Ua9ԹŠd .Onzc0 Ÿe%N06M5oRj棒oTZkxӊ ?] wğ$!d5D1u_Ɉ#|1b٥zi cw:GBm3jhG1p6$WH$w3c¡V8x v866*/PKWQPTEWF? 
[binary data: gzip-compressed `kubelet.log.gz` from a Zuul CI output archive — contents not recoverable as text]
;ҕT|vm12zQپd_Ūe@?.BzyFa9S#+HNe, ꂱ9!cZh> @T9BZӋM'T o;>kHr[O}{"{6[8x{20V?dC<%?O/U)YvRBR8-Um8%SqR7By`՗.4[^\>uv*mw1•8WI0rgb_uy'X>YB ``Ac&07-eO9d ?׷*V|b6#Z|])@n J;>]/x5[;Ԕ͛PӫK6pp|?Nd7;}#C?Q]-r=\ i2҃ͷ+ٯ, uQblԛ0qBQ4KaGxNTZGi l;<] \а9:z]az5}̀.՟b-OM狡tV}V(\ݸOXW0(f~zI|3LضדzоZń?4˗qX%&r$\LG@T9SCyrV+)}7||՟/ql{w" ۜ:OH]$VR{0 p6](AqR]GH՛M4u!U=].\I"'an۪bPc*mdz~OA6MbکT6mXyoӹ8N&&!|[5lk5m.,/AiCLK~^Y|:_;z~]|@duI2Z׍V붶Z/gq#XtVnXc6-2 [Y;-TLV6T3?8q g?<ǟ/W?>y2soOO_fäCA|;Uݫ)Us#vիߡ^)-Uժ`Yj~3 'l4Sx|tz~1raޝgn<ήg+@3 5?|"JTJ?ե .@7<>(y=ieue>llrW=QG7F>>Ƹ;^I1C,0`Ȋ%X.ESsNJY2.p=ذ^=nSaM5tv~|:=ݹ|ރY%к'&ZyAT{! œ|*\} :*1բʇ t,HOBc9dJJ@Grs*E <۳ Ǖxcu/+ PfԎMYqC N./^0<b~RkԍO鳔~n~Kw0y\is6X'M$\0.h&z0G\ۄ կ#=~'a9M(S,jn-p0Q$'DQ m6_i],W,FJ)ݫ . `O4,Wlf%E"̉J8KtV|J][_3S_̂6+T 5[ȂLCM<s k F>)"e [wMNSڣ)AJʕO*EBj8E Xeeܐ~cN0(䞚y̧Yn`)c ]8WX+dV!1Q~^t XalI>XMTP^uk*k \ ?;kbmn<q@pĕ{M.xp!Zծ?W8p=(k~Г&/STIvQ!ȵQg1=!Ѹ^m$ םj%f1 >`67mT`}#HIċ$E eꔫfZ8,}2 l1E`::幡ά MY :)|8}dIӕ[ԋ {)iզңnWxgGĎΎ1\Xs`0H wD>D eV)J=AD L@X$!7݊f:[̖ۛ$'Blv~յIlX15,TBA)NpMN)em\,LQp/>֍*SЄN'ǹv9VMN貧,R݈!TSz9{<dץmC qۜw_S冸,),adEp3XwVK4:5=o|p@L[&^aP|عU=0TR dzŨjG(wыyAhJq! ȲR9qΐ5aֆ,7Om̵k|V3g{YASZ:%a׬4[U/<ܺc/GO1߉bjƘ~+<tPFNa2 b"n_u‡:^i铿oŐ>t=~~J'Ա?DN:OYka]&f{R"ͩ_i&ɲrO@g혹b-u _AxhDG^롋o}.}B{O<{/ѐb߷Aw#?Q<:~v:u.0ǿ:E j|&;ex݀qyr;}0f:wW%/|};jt= [+@AuQ!9W*YhDag@ZNuy(>S̚(jUAFiy)gΣ'g cNd/{yDaU}UYoK)zǀ;Nr2)Fg29rD6^U3gG]A_7s@N5XQC.1DEp&73!h!Zs;^̦a93J#wA߿(#~ |z{s9EVKVɆ5N{0 4Nrcn7Y})Yex,Zmb&ٌX_"]-ql~ ? Ra]BpQʘ2Yr8Hɾa IʖMf VFq҂=WΞ}Gtq4^_8,dr pvry7-ax>n'Fa:6*zo>oοlk6Y9 ^}uۅk㩟tqlg#76POhe LS~! 
$Wۢ n￴n֛OOs{ 0/dlJH~{҂piA.ON$'$(sI.(V\Ƌ±$T 01s"eoRLW`Z'hQnә G@Pf5jKQ\9iV͜ $w|Je`}~}$~Og5}zx3Y%VdkIzanG]ijfJ`%7Kw⫶xY(_DG/\ m7 coPԀ.{z*Po;֧YW<+{3g\o*JW\g8 ]]6VS Bbz>u74r&& :u)$P 椕N7 JuȠwmǯ#ə{cb;7>y1r_81=bzL+iȳ.Ӝ^5*K)on9Ef#/;R΁lGWYv@K*0Qȓvkl{NLصLҠ @3ތjQξ"$3 `@T=6HWKi[|YViUOҪg~gF8X1Zƻ5KpwM)A+p&wdtL|pY6B;#g5DE2`ƐBK# a%fedRGuyt{<\,t\OlQ;`P)笰ؕH~7\,Yn|158璦 BTBN7Z2xa~zG5oj8?RTBGjYF%e> &Ha 0 {|ΠT $SL{b AXwHs:`{VjY{KogP]bCM=﫳N×G`^{NG '1vZrgSǘ֜KcVsuJPccLu:;Q[ڪ{>MY Tn03@z&1e},o11'TĜDY +RTrkQeچ$B ˘ k]٧꿟r44n87#:vy9g3U VYǔYAՎ'Ӟkم巘Px$FH%fCZ͜5]NۤE?Ydw5?Y`Kv?e%/T7< +*#Wk($˖gtN20$'J44KI[i(wG^59=+{AYrAГB)fbvH\6g@-s j#c5s#c=R yƾX(*c!XR~~[Jwpٳϳ0?A~4<~Y##N^%J7(%0N3BQ45SJ2Γ@;H+3c(FBRkFh2a#tٙC5i-s#TPw j{4͍ R8Wi-8-Rh!#G _S.JSL̐EA&CXMđ0>@,PP be=$O: GcJ `A遡Cf Lvh%h(FL`#C|H,¢ >U)4џBQ-w3bj{WM.^?[biuXDGSlG( C!q|шVDh97,$C Ld%LWޕ$W)쟱w\dއcv;hwcu B<, 6XER"KȒDeV1EfS=<_ ,$D} 0;p (RJͥhȜMEC\h׍&?g6G5ǓzDTca$Q^Ҁ(EE$`J@R;Iδ:,:vRߚԯ;o֫ТAfwmn#Nmn`o'-6xfA^L܇0ϫF)A8r'fY.@{"e\;/vjj|״ji]Өl~WV9~>tVB=eR>ьoat9L8}I`OhTS,Nޟ^?\Plt,x+5T(MaC 5DqُrQM hu٠~ u*e\8&TVuڭ4{= gul3g&| C] M*_2L˳EҽE˗>D3NY{_/L0mY:L^R<oЌ݂Tiݢ2YC0p~彾Tуͥ7t_?_]+VɧE!׫_]3S]`\h4uvb\Mn])*iC%Ye^h[&,Mťkn[$&E*73ֻXjig]NXJ"a-rld(gCg2{̌W=俬fL1\tBec*O׼Sԟ쮞{Mq5랍6 lk;'PTtycz%V~rCu>r@u5C>1#ȪR>r_r+Ƀ.Ύx;\+#:FJ {CF-8> r_I SWf; UfZ{f>7r:6l=T To Xvmn[b&jr@_ XvqgiHfP )Hwʳ)z)߼DuWm7,DmZj9!W4{"RlӲHZNkZ9t/̻}}E-]& (#+#cA1 2DD ZlPa1$Z-&kE%t'ٛ54Y"* eE6M=&P}Y`0D`We*t"X'>5 쇫D%%մTt> tg{;ӃtZSGR<͍i龜9{Cjj=h}G,0_3_C9Q6ʬKsa8 yF/ ';G]z|O5 '9㤭㭮STCڽ\542H;cKڑNr8C ))kDG"\JlB Ozve:%~cFΎih<Ֆze/5Q2p9l)0 { Ccʦړ!. #Zò\aYvS*JJ lfҥJAsR1d:|rD*b_uxU׶:lYQ"R")98-(fEwZI%HD0 AP$@2qQwWy, <Fcc֝QAz(( ( ;-3$$ ^qCmM`%PШ`L:LYJXy Rs +HtaskBG8J J 3([Ƅ0wvE9OD:Iɚ_ƙ??~ e}y883L'1ڇ巊~QEq88X-ի];Ɠ/o; THoi4 ia֙]H9ƓR? YLdbؓ\XL`;ùZ͗_*T79Uѽ#ND n[:lW[]G]Vu*&Ԃ.Q2 3dtn03971$b ZFl? 
8aearAΕ-9?N4<0smCi+_ GRp-9 >FEc u\<xes"%+OE6{% 'rZisZ E^>o@ *T[#hDdՄWDjM5ޯm@LG eZH^ˈiD hnD [#gNjmΫ,|5ָ\fT win^B0r;yEYq^Lջ~8C݇ӎ_`YY1=_OgJP΄1R(O s됉HK?:|BATl&;{NpUӲBV,vn* @>ʔv=,o;{_S@Chׅ- b ,?AfP&vs˗d j*J"dUoU{kz)ph ɪg+.ªwmXUXeo-\nrmm2;;d]̟Mp4I8)?־>t*IJݕFOS2"#΍fOulyZakO ںq<.P%=*\vM?MgCVZ{e#e{V6Zt~oo!6HjKMUZ/4j)[\XUSz{؜Jev;w[KVXknk[5 <0yv}?[}tm??J6x~zqu^[7zoR5(\K`S ~E)N6P][+Wi5P)j`m>VMvɧf]˂]sV@b Im˂wn: $Ɠ9tc2C")y Dezr l*$|kZ{-v4R፧:(B `\ "8O` y*hVL=|\R@xLqv{;Y V4,gL{ZRn|&T%_ f>ˤU1@Ǔ{ ^gaKu^߽hB"^Wb?^%lsW'%00spxh=G+S(iy7'bqR\Xq㢖L NL88g q|53{/^I^7=B]5⽻F&ql/۬ly,[KxoI8ذfX~ͥw^۠hEG-.j2.F?|7hˮ&Mz"o8Z{3@mlvF^N9WϏzAf1,N1  ȺHWY)b\J'呌L%jӶx7|M}t1p,]wUōs3;*;wWlR׸b*;ϛm}uzDak^數Fy+b#uhy+I_ш3>gLFP.L>JF%S8c# 2-Ir+W,qQ}+9IHjR 3),8R#NDM0< (E#l$$3#hiJ"'4gJ/8k!{60ceRy NBP#Qҥw#JxL̮v1 j{M `$X*'sƄENJFHcWi*` _h!#3VQ!(^"qD օ$hTGema<,&n<VƮ b1x("ˆ{Dqkԗ$L@8Fs`$a"sd4J: Y4%D,TF$AKg\p^CQRţ-i[E˥gG޳]ԡ]Z,%EQiԃQa#,F9TQ/ {€@gqHRԤv$r`++ E %}M;M;Mm1~VQ2oQAyts*=#"9hO!;#LM|\)b) 2l%{fq~@g#rۣ]^vîfz,{ܯ=y \9.r'Q(iT8 f!rlA7S:C"Q3U2VSxxs_p('B$ҀD$͎}"F..!3->x7]OCu8>յ؁%W3"ޭ/xVG <2Me"¥ H@REHU@ZՇ,<, ==|i%,,&B>e->q'I*Phs<$ڊǸ,<[?zz#ogFq%*pNOXI"F:-$&ب` znR1/(A2%ԽHQi R x^vxE}U<$ᣈx2OsJR^>&yt$ nxT{{rվcj\ۧo-߽$oUo z-?\YV|3Sbo>aVu?*fpBy?&˵K j.G:\*%k7 RrlQb8Ӹ5%߅pvjGJx6V,%mpvhs,*'OMr 都Gӽ7{BOf<9=ARlZ]O?j>Smp5ʡQKjn$?<ۃH>(.:wƣAvWDoʐ7ܰiV,Rwݍ`>n.^&,GK7ז /Ͽb/V\<ֿBf s*r35r]5^p)h^2I]sb>#F ;iVceKZ@7',gE5Nit0Swb~w[_W/m/qv|5ӳoPJC(h̝DľYpY.a31ʰ7}=*|nnhn];MwԹYp_P:ݲ,UKKVaטi&9RE º{N%E,DU~硫<멠6u}YڶI.R hmx/%gc`u9N-(gA I_)>w|yE |@1!V58!e"8r w"drT)Ke=>V8>\n{p񲼰Xh.lqPQ!n'A '`36񹟞,ϢfV[08b͝e8kdelHU<PG$o}ZkTS9k[,z+u}H&ZS 1 ME ygLDU=InYϣn) GƣZҞR=ҒE!KoȫHgVHHB'732 0:~?{׺Ƒ_ȟ/7a;d'X`w 2p(;J:/pTg(␴Ԓi{bDgzkzzudy 0Đ*'C3{.؍ͻ5gzq9 OϿI+t}w!N YJǺ!.,qyh.iQ餉JR(͵:.̠кP>}9>,DMi6JװO<jհO<ăJuP`x^}% }%SIͰ%n9ܛxm 9xz;WWIGD Voш0X~w̿V6]N[ߛ/wH\}KTNHLK/yR2ёHRYz(%R=b<8L#Ï̡^U &W{.0EK# ȬyxĝƕFyF6a=  oT1|`r[{1A1k;{Np^⼂T.`%,x#+5W9XEX`ikܿU>ЃX5oVު J,DxH1ϔTҊQ1rv+F!QeWmVŃO#]^ ߓ{8%[g"8>IȂm]AGYXt}չm GC R6# P^Hxd4 O+`AT>y 6RBǢbȽhTSay*8I3.H@=GjdOZU U6F)D;MM.x뜡DB0*QYTWpR9& +/lIU:x 
HT}!e1XC2#&g脲jDKZy-t>h $fGQFtT$H 2O}B<wNѾ#gy7r;~˃I ]O> %mϷE\4qm͚]g$;߼EQyE[|$M#22 #3i!r* ̅ FM¥xhQdZsD SFaS!ǂAVE$FrKJ#c1rv#c9]b!+t{/\*9Țqώfڕ9ﶾbROAe'ٻɸ^|-Q,f@hăztDS-MIҨ9VJ+ylwqaK%8 A1TB  LD:ϛ#Jލ(EnviX31qǡ6P{`q<Prh ^P*m8)E%FՐ,hJ#TY 6 p`㣅!= &"L{#b^.$G:*k a1rvacw1XmFD>  8>&ah1|r$\('(B@W1"*NM򠀥3.8!(Q{4ŭʊˆX=y_Gq5Ea\\RZQ:|RmSNؓA6>>}a1-x(na/EYol`r?$QƋYu0ZH)8*M_9EP2RJ[vc@l,/,m]7q!9oHG6d=)IwB"z蕜ё`0^Η8&4w^j4w_4zNZn0?{iףR>1/|#puIwxOlSO@Uɍ~M_^4.s\pAd~|RG\ӦAw ȿ:\*%utZsrb)R?)탶זﷸMl,C{[y2 #mp-fZݽcimM;8߮C ۼmx$Gƺ =Զ;O:6a?m{kCd<2Pg6]|0uɬ92% ^*KmxsUwej^CL&H2 པLhM&Չr8)('[S^Xt|~E |s{%cBb!jpC4DPr w"drT)$[é>Vu\4[7>leƒǪ>^=ǫ31TG:U-N"V˂w=UDVLVEe ӖSwNAZ%}zPVJov*謺-Y 3iQd2e@Ha14,'PB4Qu.+5.8ㅪ>~oW )R'J3^%)XHڀF+JJD(Qr%@mq* C!iUAX:kiB m\ǘ{$4y΢+Ȉ[(s2s4#HFr3{ε*@;b5)92b-F ۥ5YiQ61@,uW(̩ћLDGfL4*X v~IH~{] (*0aNF<1zm8j87=Ҹr(1ZGGb< khA7U&cn}>5.#8 S%9ٺgnryT/ώT$677ZOHt6tByk݊p6O2EdG? kSo܎WQgW_B''NjKSF*S9Ƒ߫ԑjzy׉9OI;Gٵl\T?/xrx^x$F+boiO_/疛]sħϐV;otH 6t7 Fa֙EYO8y*>N.[^Q>j}fRըuvNFoy#|Q,2l#:-FjMnv*'l<]kdǿ ٷ?|xWO{2g^W ,EᨏLw";pckyǡqnhauɷn'|q|qof5nc̎jm/~{2 ߎk#.=f#o\h•zF{F1?k,.n7T I*eހ%KB%90- H^jZ˵$4DA6&"K fc`Q9ʸ#hDy )Y8pk;A^HpcRDC(joJ E%b8u!ɁzF ƨhX )$ÞN= 2* 20%V}vYm(9lgZ5| eU uT;0By` *RVC12SR`P ~ $0~;`%~@JP"P)QTX.HA=I΁i%&DD2:DJwmH4u 2bg ;_n.l]lɐbfG-r˖LD.Sŧܑ /ʃ/ʃPʎVbKy* P-u+2918G6S(iM!j <Phw[u-/ T L(Oש{ qpzne]w>Nɚ"4o;>) PĤK\VYjr0\*$1w:~A|I(ʾ0"Y B0b>E愳1Hܢ)"Dv'll#]6Cu2+N^uej'][R vU?5~װPg+4,_jq4oYk|?8-Pe2{{w]Ջ~"mjz:8*) W>x8n,ۻQ|{ؼuR*׀4tJ.ݖ홽DYyѣ.ݲeZ,8hMqwB4ecL$ĂG=x1q(<0#yGOV [@"1e *dg"hmL=1{gRO D-m<5e?].WjWܸ{{g0qVl |\q> {Y8jKIB\aZ'+J8prPr,#G,i& هȵlGI'3&-KB$U" 冬o<+1KFγޏ̖ t[z!x3hً??L2%Xi +(7Rfuu 4apiB0 87ȑl:"])FZ!t0iRoW_?o`O'y[]{`{çh~Y>g)Sp¹S2fv:{^X@H;C8Bؐ-Qˣ$F`UL`B;6ˍAA$dRHÙ?ʚDya>+/KZhVㆁYʣ>)Q)M9"0@du%1fw,5]t.`#9: қ8 CӜ,=g-h ڤmQerf! ,ǐ=0Cv,ǘ,"?T @Ξ]LؠZ@%-b6&% a!}Ѽ&nד;xeo rj.gh { g )$?!&lr|؆~TdU9( y`Gryʵ j=Yh&70?C 5wO%ZYi4n͊PNzm\2ڏq6i9dQ[c32UL4sZ0nrdꡧwB\m8`K;H:"Ln.cc}yYEWs,hMoG߽w };6?'nr筱W@O u5 5 MmL|S*;,[{X^ǕJo`VUa.mԻ ?@gzF2mzzq<{ H\\>%y/ B , yqw`(72sѲA!&r.Ay-ߺRk.$`,kK dMV(E@6s! 
-X&s%{Y>G)y_?50 (-|  ^usI߶˫yҹ$|OJ1販9j>Z!Al'/ 7$"XϽr\r{xɕa&0$ Ĭ΢K.E_>M S2: ǜwR^qc@+,F9fOA{cћ8{NwefuQ:^:%x42DRxF&,DR%QX+]Z.-o_[&'~ 7 4[hy\h)rTMΦa@j6Ӌt oo@UyS|zs}>Ř~wFjek紬Ŭu,Ђnw|W|%TYz_-è֞Ľ_,tj6#0 )ʗB)۟@]@>RK ŅbHb fm9?|uIK1FK)!?,ʺgwh"o꼺JqTpϢVRin^;o5f\}ϴ9í=Mϧ_;PQMj ngxNfKqέ{-Q[y'rmۯE:\[evӵ()mZm{:4q!L6`ZkO/#2_}݀x{YӾBZ q`+N;UQD (d.ti>꜈"R~NDRpN7sN91nEdY.%ߋ&ԭ5j%]`M*>-ԱR/3ٛ,wg*UwJ7sH.(Ӝ>|n-܇GpKeۜ01"cJy"\0NFo*RJ=t=nI^:?ijGEww!+<P?{Ak90q~ވäUy".Rӊ".0D\Rd8JUWbҞ*R*3\>F|nF6۾BghAm .LK՟~d7 >wt;bL3ވR on ff>luf8~y/"FMWH󩚌Sg81xr)V1&S'~&w8<`Wn H;sKƩUMrEU9CfFE~#dc6|ScUO?{%Q2V1/e̒ BE`^ (n|'&D h{D9g 2W$0ȗq{)H{lWH l0WߡҜsՈ /\i:usUv0Wߣ"d%+>r"7W$waD% 3ޫUp!--,8O ,>x[.Vd}eSfh6R]߬0Zպ5nl>nr%?AҴ-R(61_:.xs};woMu6= tTZcf+]#rOc˪zS-OEdHR۫1OSq 5LJڟgosg0{_`7md!t.}8GCB\vč5sA|^!7T6U(_UtD Kо;.s\LS†|mGM͘buh祸 cޒ2I|GSiqJ3 );ǖ5AK hE&LX7`;k',[slK+vƸAN֙)b,[8Xa?'61L b)-ϰÞvx @6BB?M> o|?8;UA$ʡ&,2[^38 M}k6> Ģ#-4Dž[K << ReSIAPSgϊ51(fFR>]E ;Uls=z)D4OsbsŜbg'p Zי7:TK ْ erЀCE148"%)R;8M.3|j,MLΊ>@&=78!خ٢balU{Cݕz- `Q lg[N@ C5XYp$pwL` CLN,*)9%j!N`y f''c6 Jo07TQv6CpnAcYj"zgD;bx@?cBYgZN Cq&E{V^uT@O<8`y:BR)Do14z"2J(;`.dHh0 Rc|ZQ4c L "?h_=| ڥi&ˌL瓊q*Mk$8PESF<.z9$~Ǻ $c6sʠz>nx`{08å;F6T",]Bڤ̰Uڰ' vcB/"Ѻ%M AS+neMv>*7oŝ12xoEUWuS9QVtr@&[WB#1h; -Xavqwp=]K'k7TMTWdvH؄0]8{j"!*kPK_14Fg5r1 E7Ģy8gW% -h_'c;64  uPt h V(UsbIL[ʼg ㉕g4ZB_H3Ȥ |d+Mbqh:BfMdM52QZB,?= F ;ʛ7g!KڰdP>4-Y!3 !(ƎDSeXV9?tX+3Y.f2U#ygTfumgFiRk4[WotpdTa寊̄or6㳅@G>76Qy1ĭ7 y ؈Cxg;Z״ngL&a9ϰnX3*m wXC5\B9 ,o(-x:zB2pX@{GG~]l8f62q4ejdWjŇP0OŲyA7 0J5.UHޮxwR B!c7:Kd!o/kqǓ ?݁܏{i_Ǔ' &>&, ƣaOfI*y;x3=`J7k$M4J([OJ{2 gMiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&vƾ[/Nt8BÜ $jm> $k@&bx!MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&v=Rf$f?LHC|E@{L%zΣ6J3 Dn j.f9,d]M@V\%Wk_'G|9_|GˬX>~J'Ou@m^3/O?uE];ix=??C. 
0)aBׯLqsOKrH\J?O/jdcMmh˗"Zavݡvu){}d^=1_9??e;ni>P5Pdy\Aa'=*Lֹ Bik븂\zu_+Q+_+QW6Dė2YerqH- e* peW}Mp%sWLWJTzR\Wr~rl7NX&7Dm޼rrj 1Pdc{9Ŝ\8[p1BR8pko4 !>gN|gsޗM9惜>ͭ:|0{T>2"8āȽ2ejSںJ2Fq0[ui\KE䖩d!X~ J83"(nW2Tq5•~Z\kqWw{9D%#6d0Ώ+QKf'$(1#]Ay\ڸ2%qC6q$\Au"S ZZx|}%sm^<9o pW}:p|7:Wq%*YqG\9i(\Api\`G}mr\9N[W;Uvɾ,>%56p61-bM⟸tx1Y2fmJr"8a܄MՈڜ&2nbnu9 +%W6حJTF!yj]\fjYu\ʭ¤z\lj@y\An6ø+Q6<z}G\E} WLa\dFԲܕZ WW?qU\A,a^܋OC%9!Cpap%r)+Q{㤖YqZ8e@>Njp Ԧ jʸeWz$C@i\KHmQTRR\WP @yw(FǕduWU1n8)L!w`|dui{GA,6~ =1As.>ٽI4[<`Q3 9LfQ;M'/|sҒ'C񼝧3ʳ.Tټz*8; 8< D.(rWTf!\4X-6X{cubY2Mfe\wbK>FWtr@˂f<`)~DQ܄%޺L&v&|y \Ćh\0\q븂Jj\]Ƚ@%YuW{gy$\qƦap%r0Sˢ6 *W;UȞ7p"+ٻ6$W>2bw`;ߗFЯ7%)J~3|J$EICshM0LOOuwUSU5碮2RIwՕ9+$]er BjkT ٩W sRWHQlU&ל zW{uoG]^])>I3uȧ 'N:yw4*EԕzRzSg`٨L}WO趫++TW9T3RW٨L9uRztl[UDUNhK/iICw&Т̠qT]M?dHn9;&OE&(AEZ; ^X&3<:@p!KL%Pz1;)2G럘d>+s8r\ɑ9K3{ҜOx-M2YT?~u VlU&z]]!QW9+6Ǵ]]er:uխg* +TWq3RW`EF]er;"^]e*+TW)y. 5`N]Fue$Ԋ ǟ3߃`vCob̖x$ ^_}k}؟ì>`ߎX~]=% +B BBD/k6qgKc9`!3t7'ϟ.8'{p9Á.7D.2ldw[1&\RR Qb.D O9U:0Ri"~)mL!z+}OTn(*ozJ3&لduEB=-2xyy3 9jixEz<߿PHI/4O#ȴz ZÛ?|kn6r3_VF!R#" -R5oCaFHFVY^vJUn\3g[>;o1jRv {!]?yTQ7{>#:{ɪ{;$sݳ&/Sy7ۉ?%lیfsXZê'?~_kM&Y T+ͭcjzBuhTRCB$^p*kUw9Lo-STAvQ~R܊ R ]Y Rg-7RO DB-Mը`'I"&pT>@:CdILm'&'}^j~~$~dk,p_!.r4Qy,93o@mg뷖ތF|3owY:_xc=ưƳy dc  Ĺ?hX7Pb[8ߟEo_D9t;u˸p7CU y\*wP;Yfb>_f&#LJ78{6FC^|9[z?a{cLc-Lo G?Gw5ow_,.cD=8[{Yvq{܂].%h3s67)p Ճ/{_$"2‘-w{%)2fglweAL;/#8kpw(*g ]_Do> `FBnRbTB&AD<f JhEU ,}N9c hi92JH=:94F|H{ZO"ڟ4`zȑSĨ&4pT Ekx&剱6h96FȱDSg}&> 8ݟL "l B&7bڨ!:m }O 9NrES30)9O䏘9DG'fm?T&4,*K- k"Ho K3Ph͘[mX#q ފ9[;[vlvՆ\P.(]jUd3'hz骻FO}o!/ey*3:9Nj@ȹgji3ÍM9d;OVH.ЬĠl0KAQ.\;H\Vĥ0ms7ѳhs%!f0q5IKƈ"Eq.iCX0Q*aY$VP9e5p hfS(I!Fc8$'m 6m6Evp.&lkl_dWh)xiE%20zls>Zu}%,(΂Td)c].E£Ei.qq^6*+D9ϥaRkjc(! xᬱq. 
Ihۺ$e1r> GRe+ D zRyNb|2U4ėUԷ ė8/K ..K XXcΔ圌IxaF'klj@>*@rU2Ft2&K# m"JzWTR^%IP"z pHƦяo㴥OoZ^yz mӋ=ΠDh~uzW]]xKv;J Z>WQ-~'c}*(-GHBMitO() gQqȧ;D߬Q䊥kbGRV|8`މBHMA0F'4 4qChpQFqgH49bA@zƣ:i 4z*$5 LH +y`\J8s1$ˉyUI `ψGscZik1rw".64t2d_Ͼ=ɱ"[}8io8'l]!χࣃ'4M\|_1^D&;4MD &ZiqΨˢeKlQx4.1uPNesSMv|?Z+ 7ښhb`F᥎ƨdbg(o*D,@'S69DS10l] lD0+=^`E\6Kqg[5_GlgygrIH `ǓnJtPZp{Odt9+coflT^!~ޜJ+8/՛ɧ  1)@9Y8OlbV,c>ChvSQ}hbZdڒx,+!  >((/gm|(ƜIQ8ΫԊEz%ПAt$n%%YLGeAɽZo L7^;=C{P;|>gh;ˉ<.RPSw@ U<: #9Z8ew~8_(AR`mg9dO`EԄsVRVS1rDE]抖iWۭ=u$cQR8^Q^LV3k" RL3 LA6[[:cmػGr%Ɛ6!D)gK4 xup4u9O}LQ­`UtăH*&FSF!g 3YJ >i 1MW赂;^:9 [ZIC@h_#/{ 42j2Wdd#̹zR;<V)cn{s̝1Ai8Z0F84(_.'JE1N%EVNBZì]Qq8 eZ|60s8 YԠ Z4)\Ԃ៼Zx£e@:q˶θ on'3>[W۶nqJ;Hu"Hqy4H(Wor @|#"#Q} s-8v0Q"RvEk_{zx:4,ygUupZky,-zU!Ԋ^s=HjqʅG(DqJ5䌍$˴'M!\w*RmDEĥTL$`/$ab]8 gRXfAQ1xΒj}̮ټMx1XBWk- &F[5Jo⸞V>?DYy"DdV A{ƕFy (&x L`e/Ew};iBf&Yz0(dmsi,;޾)iߨ-XJJۢ|gR!Ka8'odʹ5XjgIsVj`PHVjGbմ~YyFD.2`*f !9j\J 4G<lpb!&PS,Mz7QKiNvMi^O3Ϸȝ&c]?{<%h6݋`S}TN{.)fma u`R_?~4"h3嵋nѬLF^2+MPٹwCiT'ғ%TX4*BDc$$# k}xHJ4>7} dwIU;$OhCܗ,kH@fDL`XVPV-hi@+c DpT4aHG%J@2>9.h-D< b\pјr"895_4Z$iN I STif.L4Jm.P6WjBH-,.:$NBO9{ 1Hn)`Tid,Ffd,b+XH{,+^+[4̚qˉ斕V&&~:Am8?2bD$A)a#`$9EKS!89̜Q+ޕ=<C6l`;8˰ɥ *!FD&"KF"g3b(]Abf=j:'@` 7UN : 6#YjHZ4_TرBF͐ZўGXP D1 /XģQ91>XcWD#*IDA4q: a  3 QiNP:-\GSB$BeIRQq5D%Uè^EfEdIF3ԕ&%cޓ66휏6y@ⳅ\e}ўI'1 <3"F"ayh\Ҹd * )aoמ{yk.y ϔ0 Q\[:tpjַϖ;>mϩT?޶'B8WS>rE|< w: Fp vB`6e{D "d^Y:j Ɠ\c@9w* yT$LJ'|*i!>Cws|5yYT`Xyިn=qRX=[10hB, .XD H*-GU%{5Q}B3zZG y\R[RQ'(wD.R%Q52CrV>e7-8|bxROWZR*b:( dE9#2^ȲBrOQiR/Ť;$^W-Zdv[␕MDK)*m fa{YH_*Ynoԓ?MrJXjS <GA]29K)O9*0i90~œ6 Cr"" mSrD|NNH0AQ7w{ҭ=ij?~T9se4Qu R>2xv>_uŵ_b#{q"vw;?09y󊯫 B \pA~n\|҂h?`7H@9rÅl$H]-zZg/]r0.ZTsf#0.hu/ͫ٬7!&{9U+[ Vj35fIplC+gR9wշd ̞{67gTOpq9_"aq}ogt]]O?}+NWrS/NzLqIUe۳b6MY]!*e77i#͌,uiݘVs?v5rRZf>}|mɐkg656 m'_TvŃ/ 8o0W0WeSޭ8SN{uQW=k^иty~-ߝjZc^Y+MAmQ=w8^mffbK?ś:CFC oc%9$ᧃ M=%M䍣Pf\|In6mq+G&Tw˱3!}lvO'ʹ}} NW/LL,a~V?MɨfO]:q]aU׾i޺5[kfٝ//' W(pE̙}0Jllە͑j4t9m6_#-wG!m=@Y[7ڻYe1Dž >Q̫x4Utǡ+#{]|d[mߪTz$}Dž&rq2lZgKՊ~Dev'ȎA8zo.߿{ׯ^K\q#0>ԃIApnu럶zE 5kئk]EJ~rKh1f~6_·>V͉G>hZ5"d+(Tab~6˕?htI? 
h.@L+[%j }A[ns]D(4A>·Xޙ5|ku0AIw:΁v}"ە>L# TO>BB3ZvzzzX=7\=j\ eu2+,W0˩_Q Wk]nXү/{אsSl.M6vΏ$\-)R!);+/pIIHSp* gzo^Ne[tR|fMV2D0‹'ϋdhp4|3) D#"מwknVy9?-x+v/fvϥ8g2"Aa/I'x27>F򕹁TQ0WU1Jdy1{wvqdZPk|{SΞPT5 cMdr[5hǚ0x  \erUVҾURk5_#\ #W`m\IġUp\}p%`\*/UV~',SJ Rq8p=0RwT"2b=fc4˳\LSNg-fE|xWn-#~O6d4+C Ԛ2x;s1FɎoV8=G;@c>VBcvpBl}ǫ9}Xmݞ]I5I1pQ KmJ(Ș Q&giNE9*0i9 "_; yqr'>b9 ̓_npT''. 4I$|QoCNs (eё`0^[Bᏻ0>=G{l:^Eꩳ:2xc'N47ֿNuNVVNR*$/ob_ &POwg=G.Dr\)>^(mXZvTU]XL}Ld'>RBtR;mg_41SJ}̔ʾ3=ʉOU/r/$$jI5*BhD:%>r' !*"􄚊"4NScp@ :g(@,JZYíN*d':#gcBv_zRb!J3bZetBY ځ"8M 9+a9*(ɠ՞4Ϳh>f[I`Չ"w,]Pۼ(YN' ;Gb/>%Pg|rcѱX%lIezmOfqGWKmTk-=UEmg kؽyPw[ZOs4Mp3073i!r& jI5)2JR =}-,* 'z¦B2CN.nUDb$ Akd쌜؝vb! 逅K/eܒOaqJ=G_4(h40:ͿpĖщ hx0PϜ芢)P5gJw"6<Cvl`;8˰ɥ *!FD&"D#vglGl75/wـڃ{H*'w ƄENJFш,d5$Km*eւMB*|!= &"L{c?u!@<:QY1vFv<&Qƾ bg/"8 uIH8Ǩ"X$\(9@5"*NM򠀥3.8!(Q{rVʊwC3r#ӈSge-\}ZKE1.€.R:E rG T唧SE$($%xOx<;C1 lcFg3NjyҤWR]dxnN!+Dž:F@=ZGNBqBk6aGDf*d!(xJi@C$͑CŢN*]p}B#gZn=~?CKޟn|D[n L-ݲ~[z`10hB, .XD H*-GS%{5Q~ng2zZg@/ y\RG ޣOP$\P*g2CYr@4"2߿]kMqqROWZR*b:( +.)$x#ˢF =GIw&ke,WMp6>(ڳ;s Ćy?Ȏ.@< pfxM8a9|'46ѝCӸj~5xRs?B~|*X1l=/~BOx~18OmIsب5;߫f'7:o{QuxU!TOQB-DqڥoU$ tpʌ'AꢡQ: ؛˳ѼiQ,& Ku,VqZ}E E!&{9kO+`zvZ ޙbx4ّ 7w ,HH-ů>dI%?ZеbMON>+{kQvvc?9??fM:IY`[gKXZm6d\d4WfuMsUʐ#n?F^팅,GZnh]f}]PKy41rwu\5U s>?i^FR,~~[G'Di57w[OP^Z N2۹H}tFvvz$7n^\kwl6mj҆.gmu{&>#_kvr^n ocقMPz^wُY_Xuټ*p&eaYezϢ[5 bB4R hm!<.2N;ǩhX ('ay]YI '(xz=HX+u2\Ý.\:&UV6Gz^@ޖ86+8څǪXNaK>kRΫq3FhP9 LL.C 3 _Ma4ZygitH8oǨxxqr^E_|)L"%ܕ`, ,j!І@fl!=R2Je##-'֔7RstR:k֔z.hCLDNj&ygLD[L@ނ>سuʶo6o-{P1M;E撿|żm+1|$%#XZ*B|z1J]&("+P%Z3+a``t}FׂbyVV2UYլgu͎at`vT*L1 ~1fR]WӿpR1 ٥&R-/g0E#`JŔ2->/9/>o15rabQ2J,QR :9c# 2-ISHDG"Ki`Fd!'X77ԣ$ߣRRwQ&WQi0c <"2BIr9nob5Шim҉4 z᭭/Av*oKz V?{`#7 ͿY8=gv%&(aY2y YJo'=q`^zu{j*s˺jD{" ƥBst=7&-)CKm˅sab I2& $O93%kè3rFQOc n[?>-ASWrg5 6+y$YG\DAʒFO`%hJ(LOT#Z}p"HkI߽c#hiO;3T9g VU9"R'\T #2z0T(DFMڭG\h&6F ңgm(Cd.(CS0$t]3rv,DޘoK\kk-(?SrKT)8%_tKneipJgN8Z̔UW tHaЩBO):>D%8A]m9T5^ evn̸eeDɃLGESAhEB!JtP.T/\y m>yTӄppinpj%q% 89n(մSx^%;ъ 49ҫH툱ps~\GԩA0j[A'BLF9+-܆1u{ 2hpFdN ޜC s" .{DN|YgAv"1{z|jvܺr1۰LOoל_>LEs#5G5gZ˳QXhz-4R8 
/KLټtv{I7؞ ؖ<aj[%+Rɖj JGU">R{K&NəRrNxxB $]jvm7jЬzcɻĬfk ꅸ[?|4 &5'.snkE]5~m?5kBoޜNOWXT”siOVwQ vGiQ,6J}✶4z@*|Z.MR_x{39}n=00WK/gg+ٮ8GzכI^OXSOS!#^US7%Y,Raz k4#-ppy7ed4W6:dSMcE .zov&G5U8W)?5jVVF5͉ߧ˽п>'RG޿Oo|z'.ӻz/oi=I %Iao{ <}+ rϮyaQ*hs >^>ژŖ[/ )?~vCS)hYd *z{qP;OyrEģcj|!U A d5%g/x y^FG]HZ *s@Bc*= .EBzFFT/FBΆʫsp䜍V2N荓]YP<3+M <^#Yk)MTLv1Vb:Gڜ؉ހmqNŘFvpPnLy0蒺Vui;/AkI`#a?,*ZhDޱVz3sTX]_A h?o@yHx2PԺd}:NI8ב $Rs+I>qtr+eK%2 `Z1p`{0{ |`cbFe1p8FB4@9;K>&%YΜ0@$Wb KBmmM-9#BE͗#*Dӝ\ꜟ/][`ZRUB٧r$lVjTBU\2R mk#!<0X#X CtH)>p.//< S+Sd(B#RtYEnruus?Xu3DK$+;I묃J$} mNH"0٭#Zc46j_`-֠EzgPl[q@+B_gP)u6i\)iJ` ]5(@cCx1,fr^6yQ 1iϓ#NU{zaqFVbX\3'XϜZH^ym!o q>9vh:á#I&\tW.0KXG* -(PCeh79[9RRd& <:L/gǝ:h8x'vpNdDd"WUdO L@, IB" ]wQLî42uO> r7=[%9s|,h)sd JakRfܠƘJYx"0$V@_\e2ruܑp ͞ 1BEsmgO%+x-h8&ګ]_?t". 6Em_n+:,K/!Uq5ݝ&>-HWkG&2VByy3!dBq\ GdBq:d2iŭP#/3 #\FDn Evz1" ݑ)lX#B"XL5iKQ2mM-S|Lv ܐV jf1EJ*Il`<{=|-X.7fY1^}!9[ei+rsI!Bm6Li*+&1=4NNTn͠{LH8oԴ;k.ִ7Y2~Cz{U ;w\yůD]M&H8Z^vR9J eVxJV9oH}*gm cx&ygmm0 | fet R&^d0췉 eIfMZ_dۤGF2@2ίFҢZr@V_qWdWu 9ꜥPsioptq#Xp9x4ω _USeX!P:#rTWC 0[ym>&u{QҠv̈́yaxsk-SrXvػ=Oބ+- 1׻kzȵ>^> o $3@144IpS1חI5YWD)S ޟc[Mb@=ΕC8OK3Ⲝ=}6\~i#=qϣpc[TBC;Cp\q)Od8ޘ K٪ dÝWH'OڡUeZvJq!rAZ"JJg3p!k9C#6LOlMw{!`/ ̐㪤==wyWϻg i,>:BS~_jDLB@d@ruŝr-BG/(("= \E=syma:0>̠D0a]&%VY Cܦ=pcp[,Fk29r=cgƋ9ZgK$5 2JD^Z>k hE^z1ү-eoj?LόaG}w{njti<ӦSΪNǣ0ߠJV<]SiٜjJopRgܗooF.O\dE#fU!|`. gC"=9b8dfp>feV$aQ4cs`u)Vb} {Ԑ#ϩ4L/8ME1eGb-|bRBqaP=8M!yubYFin&LC=lS}ifpqe2?Ss>fov~gev5mAɻXt"Qc/R KJ\͎[{{k<! 
cԯ|dόۻFHO 'cԉ641RtXWհikZJiRK[afҿ2_{o,̃%C9OAMu-Gϑ YlG`ቺTzL.~^)IeFAS pj+ 2t2RV 2$'”Si΄KV.kЕtRW%gOgVqlJ!icFr[XNZf$m < Mj:71IEY['cPZ92x Ev]h;[5qΖ'8Y=]Bԡ~'2h)Mɺ{(|E_&.9Jcf,>_OWKUp9P-X5d9VWQVb>KY5{X5?UѳX5#kߪQIFO"LȞ[3(d"XwmU)"8pdmQv8W1E*yNuυeHZ36`K3=5U_UuWWQCB >DkRp$zߟ M:j;l>g^n&r}O`VHX=6k"3lpօGh<(2jzyzK#(9<ӻq J`Q1y $^0srZn-!O?هd*9`L~>O3Eo4X耬IZEUC GJhS\{)u* ̂HB9U )˽)ױvFfA glŃY C3>)uwK?sݙ^\NQyM[|EE"MA0#°@14 T3 Tvw[L(֢Q[M*Ffdb5sf"T9;[3,3vB1 RA2n-onyz}v5sѬ}3M>"XJbpt @)hnrL *+ @SS<1lFÐ= JlR!`V0/2F @t:K#Ru9N~<;Iڽ^ f,7V Őg=p>#H( 6 elBB ,Cv4HRQpXG ;#g3fJx(슈cD="Hl:b&О2p.%!Q!h##V3VYf9@`nU : Ą3[XA`Irx\Oj8FaǟAA(NX8xիf_j5`1F 5Rj^zF&S/1"8!R&)#"b1h#5鎽2sj\N'R: j 噟`׉ H'M?ut DBCbc1Ӝ A eD[C5BsbZG3}=#ϗ<6BB$ѧ#`G wʱRs)4b dΦl! }Ҹn'ƍߟ5]qK="T*0(/i@"qx0@R;Iδ:,:Rߙ/{oEnkta8$chQ2v3uu^ēWnnT%{pl`G1c1!냈` 鉵80{?F)38r'{J\y.EhkFNmM[Qپ|K;zWoҽ`%tDɸHivi wUqRH&6rPuJnx HK{5 \0x'kx6W |Rq⪯d+j,G[#*S,7$߮D'MQ]^/(yXGஶq2rL%'^luپYYFrV¾;vҭѭ#)߳DOWoکRn.&Kڼdjl]Pޟ&U }=qfTt1;OP}$Kb#bԩPBc:+b}A>B4`-5SJ{V (fcQ$WS˃*,2TuB+ŤN@/qgYͮHo|ٻ 8F3-4n8J) }GJ!|oSМanvd~0 ^AN j:baL95PF}\یҽ*HS$W$(w;iD'5*Cz`HmOjRz\TkXe[ ]߽.rsr>ϛHP&Iysh;*O ggo*/ɇqh+2a=-JyZu۾EZ@0h4ظk̷Vr6i]QJVXT5VS. 
% s&Sf ] m=mx3 픚<*4EM¥dz0;?N@&޿IixO8o^}0j9/90*J0O‰2(Pz5/eq|{68:)V \f \HzeaЪumWAniRPE4uRX^O?ˊ(_~Ry#1ꉪU&(t<+:(^ iZLsE h:V6},7iQ)ݧ^ #8M?\0sl0^b>1i2` R 3 Pv~Na{ gwa}-d1,:7jN-Cӏ:(NĜJi@I800mtqM%tf'rBkBg"whOwhΘڍZꝂ@4Ȝ(I$,!h*C \+i3 0@œ+X!ap) yi''_ü9]Qb?cqu)bvzBs:|Bؿ|a`>թ᪸4h}go2 /.W/ BjʼnL V˖,<vׂ`EO_'S鍓ɷŵoZe%;/yuŪav[sa>q4<;/ǖخ,<kwBm=RƊzh놴vI{7rR:$S ixtUlb8_qJ^l[R52xe$~2PLb`bR{[/+wxˆvᆗ8z݋o_~=}~~w0Q^|q#0.m@IAPnu?l@>k*Mת˯|~9[->Z]rQa;t >owlqvSU^#3ɴB0|cA(FʲB Bv @ Q2ӟSyBn6;s `6<znY[w8EVY+:C>ft82+' 6&ftR@GݢȾǚ{Vy}zuitxR) Do (Ν&+"Ӛ,B <Nhb㩳a 9_f;PSݽiS9 qTNyK]*Bs3NhaٌZGƣ*%|逹+Te bV+WZȒU=s sl+\5b;b{J %-"p}\&$ٓȖ%ANxƞ~!@ kX ~؊nnZk,]NO{c1i+̂f3Ia :;*Spx׽dy -@Vl g "\!b˱R] iuV&Z+k,gi`]GX­*1%D3TI8]Uy@+qn}~oDw*-2d"RD 6,eR* HE` -4irt6]*הєj.AlUdYLvmwѓ[ȰHVHd>TpBAHi>p.H}/< S1ĢPe+RtYGn)jOxhg8-#k0dd${T `-$)&!yult"59Ok/呈lΠD@c,]"F_~xExiNJ:J)+UV\q&q/NBɺ c0[M ~0`"sc#-,cϓ#%GHn,}s;jh7CGRf9ZB3f'n ~"~9˝mߚ~műGGOϨ =_Eܕ*{^H fk)-k_Jݟa~7ijp|1:E>,aߊ/˱%$sɀ==28ι7&FR:kDix @ɉ-![-63Rk..HRdYk_@͊CV"X c1dὡ7㰝s;% ]Y}bx+luTe}l>BiZ7:0b͕tM;Ih񽶕J1W3TiGcƵ^ {q/"ϽB.{d9&eCI?fPq0EM>Iɥ: d۔W\+,Fk29r=cgt /;eYdK /w O$gDRE"K/}ɆƬZu %7[&fl[ p>_.<>ҢS\N.'a@I:ò8PO/Ϗ/\LGR+pBԲ'fU)]Q\X]G~Gг59$& ~#0Ǔq0|`$nb14&s`RР 7@/{[ >P?7ml1eGb-mI37,)hP1KQ p0C(K_gWҪo!Bgg) N#(w9]vn.s_=y4Gյ'MM/i[bFÊcοm&.sͯn\>6s v!K|g#vEq7.nے#n-1mZym{uazOy$G.:j ]|ku؁G^yRL +A4o#.~ܸܻ]G;j?rA4e̪ͭd1":aY:gAF&CY]ŏ){c.7j'NE8& i!)}`N9ZA]nExrc)7A_T; iyY/ r{\=LrxBjW;w3?!0c_@DZ_9ЩRli@*N;𩣌ڻ:}k&/rTc:v* N^]nlnwta~gsiius)ϦS6q8Z^*p*Z c `VyÔt٧  cfĶϠsm갨]mOa€Aevw2Cm:ʉ :+TVT{^`]%C@9q;o8_щ#-яI琭S0—D0neJGbx*ΐy>󹰦}:I=L9>+/pZH_U9kaIDS&kD'8x9,1,5Q1`uwgUA4OWdt3錳hhvdUWhfCcw!{0C6W](KpN9 J WY%C똲6Ғ]H ="B;,䀮WuJ-+h: kؖo޴8;?M'ӖfJ;V ua 1D9TB3FKM"璇H y؊M:jQՌR-TJ8~tN U*W*0[ym>(M\wuH>/ƇtH!%PF[LĿBtM2*,Oۨb"ΙʲQZ42=t&ڲ^iNϖ0_F-A g@>Ɓgn_־#kN"f^]$dIvSz6Sa[ b0S@,>#fJk φqy.̔n)%2Sc9C )h@6Hվ"z +g O嵋y6csG%]~U['ofNVzҶ#ɿ8.lQR Bx2򀺻wίnO9+Xݛqyo7[]oms?0Cw]mxԓDbn!XpwᲈEPéis,1wOdyVL9.ϙakZxJ@IbLl"8@h>ީp5'qsLB-eEwIΝ-f&Z*ci)h+Or.&@y}ߢsS+s>儘֕U|QrIT0FLW"WNeJp,iDr .`2ܪ{!]Hi6G߬L/y.'5$":w;KdG3u RO2%vbI @M)I{Y,?$Ijm^Ulws[!ܻ02㥱8›pQ 7 
.)X殺2W'.?#M?qZ_R3?\|krJ}z ^oaN(қ BVS>x?$S Uź\OTp$"P=%؀\73`2%xvNm56%hV`=w]b% 2 q7a8Y?*3(>.)ЗzS68w_ۛoE_^fs}T cXNO'ayX٥"%'|%}|=As{ _k?+?^NOf[fׅ1L0gpO'J`4|9l[ iQ=)}sOgm݈nhfu'i4S)22~dl8pnث`{?d[mϊ .zh4בұE헾88|0n}>uQy6 GAto_xG\ѻ;z[Zq@#0Mı ̃E\~޾|`߼kj˧ j#{]}fu5Œ[+?_~};NCSeuV [MThMWHlRfq8YT*RXdaGA ...&×B :st9zJx-\"υHTP({tѝ sW>9Q}=Î^NqBox%yfVr_*;x(y't0&Y~Oٻ6v$+z/= wnLdbuMQj1> .()Pz0`U9\Lݹw疶~@á;;#Aq-)X5$$ @ΆE܍p\sI5mS w Hq)M#7;J V2gd΢wT.$+x PUH1m!ŠI}Q ڟXeOCJ} wӪ3ߏ-V/UU4=j2]0*xM)[G$U'6*PK9/Ufd,Ԭ#<) %'+¤;+q7qv+Z:_(sӎ'^_]n''p 7t/ߋa0k-FY-w=wu8Ncl^nSJD4^40x B*:e-ᨼNuK$ścVk$Ihmجe%Xk!-)7fFx2Io& 8;dN#cf5*$Yζ-Evg7J]ӻu:؇?̆btF3]VScïZ&.Xj Nڀ!/K񵸒찹/aG2hhJ~Vgceev%tWLFj(ѩ0VjC?^@,.J*TL-43"TJR2+bd6+m`:_^jdeN Q8W]r]6loJr1b +sб3ұ-ʇnW^@[,/;w܁@:Z|!1 kf7x!iXxRO"r\rW_={!h^IWcvm s.T1ʪ s55ͮԮ^фT@aa,R%, \gNW8_3 27dDrT֩i>-+6?I3ϓLIp>T3F{rQ+ǓҘ\-U&iwD }]U)*[)FWR.FTȶC6aX(cc69T5IK"z: 6IoPn1omrX;ec=>L'Xq1Uv$!,=Vlxz9!s1j&_=E*48,-P7Dg%FP)Ee&m xdC0^k}IhY GJΤuōJc1%j-XA l2͉!T5;/M]8wlŚ>\Wi-iSj:0OC^ 1O1Qc/#>|O?nfeE[J<|bj%W:"?3emxy&ҞQyc^Rɓ"(QDrK0KR; C %}9|)XU(XJ&PrBU}Ufc<u5Ho`&MP<`d"3 KA_.0!C#)zyVK֦-)&i|Mt{폋|5/: ]N_$=TF9T78ro]M0 Xp"kX JF|Ƈzs{&竤xyd fvZX>E#f(xNM r>|Kwa~ /~C?(ݗ-8O8[RzH.]7HpP!d%J|?,cZ[=1aysSu=j 12[l%X{~q#Y 猚[[ M&歺7COc}Noohv| >̛76fWvGmRL֞~Bnmz˪;YwhK~nkjcl+YقϞ-go\]M(*wpyrCҒ m.Y2['YgOoqEFސۛ7*Zc ͪIt$"'B;]%H4V 59'­O%:4*]|3]4bh s" Tkuj̏_ǭYWe/?e, 9Ja!`+PW>G ?Z!Q7L-9ߣQr֟,kXv@Z=Viߤt֜mO ʺ .B)ʁU-tUT&>j)Pc*[uRxm Lː:+j7qv+jQpnϪL<.1xν_PoS&HY31xd2p89%y60Hc)%|RAZ8G`8²&P &LNd@h"4*2 )=B|g )gٯK0zN宿ZhqeX$ٓgO@E̕ %%#fZѪpnXT%D,8QFe-3}Fu:B sC=\DIu N޿q1y8~b8xgB9LJ&c$2if\T9j2r \rѬ~3~4cL-lc P.>MnѰ9_ī޿N,M>aܕ l羔(~'K)<"\$B`>z<{w񻤦6Nk25M23x]0_x/ x&ӿ~e g?w_|}|!ޥck]O$f2>K=_K5$WL4+)~ٸٚ^|u1Z4[Y4q@ǒ8'}ft?UCl{_S\i~ۦm?> j9oϦ5NlW])|~vVVB+ERnWh%#fruxদ+mt:)Ů$ӕjXMvf<}:cu7v|,>uߜͨ.+{7ږCj9ۥ-RzhfJqV#05-%m*uќ6 JxkTk9{:XeҼ_^mu]oxف]H׵UG%֢I@ߔO'7׼ӭT۶w~Т ?;{>Ç;]Tuf~jV.ufMIokS1 ? G){4q/zæW[0NA>Pjh6[6+8[L~ho"V:o[w,X_ m팷b"NMl*S%>XwƝٚB1q׹v5?l*_\IR.3wƑ.(Jc4 -\ 'm!&J,gFf_Y>}~D#T!1 =߫W Tbk"eij. 
dP9G^ )n-ʗ8(W50W'|~q7?︼pjTp`A@.~>鸎Ӭ1fv/ݚ:llgE#ۉ~?VG]g:jCowX8yQ/~Jo'+ji[E9좕L3 ]n@tkg˾$z?ųNÏ mq]b(|! q: ^^S<ܶs8}nyǹ5osZlXY?icwh4O3/ +V7X lg_,f|Oqo1/˕m4zؼ_,6l_ˋ;ऄ> \Uq%;*pU4W_"\v(V5R娣W6:mb-p5`S&ci7y47Asuy{*Og+xnF.'.kUfg)_G `b@A<f>˃EĿOr$ . {r"~~{aBHKGo\2j3 !hY9A+o sgdd\a'RpBS9>(ݑYrCJaU\E& fjDRė&Hp&Uغ6T \UiwjU@RtWX2}0pUPJ\pU4GJ kv@pUXpUŵP oªGvE•LCz8vUŕWUZ%Z ++ICjR*:UZJiꋄ+2R!W :pE9՗WWRTNn'WV3UrXUKMw6&i}b m\Pý UjRϿjA?Σ?-Vߜ]lEpfs|ӓ|h&[DG[7?a~)wwIk[غd7{JURZ.X鞦BW#u0w(4{Glc.{;tˀ5/uw7'SQ Ƚ,Zv_/b .{Aʖ̵+D,dQ:r/)MZ # P]v_MpO9uiSkd+qkQ<9^trm ray%E2Vj :(ĕxRuc5\wM8| ndхG)OLj&PƇDLRrK9q*%]v8?kZZXcJkr9CCe6XrN%R`8Nc$g %q_w; BJ9Iqmݐ.TLA Z.q %QT\&!2'+&Z/ܻI+39F(JI8E,[Wbi!Pfr$>׻ 9 <2O&:xH5`C#6{$ m <0Xmys ĔbMOu'*2 `6u:g=D 0stG~o|5G.x57k(Wچ@.,X<1&Ykw{WN(Eb9J ͊truv:10: z;}mv*zۭk!`c@Jd~HZR Ԙ2"P .@H`I$h RS.tH$ 9hPX/FĒG! Vx'ռ"Mr3XL0ZĴO2S,gȠ>sŤ / Mݿ ghLrrHXH;R),@X`t<[ \ vj9.8TÌ*AI3X0QrR`yUN+q3 Hty<qL *L:Ð0Ėqk ֐IX'C$gH!Z:e#*0 AQ8h`)4jxÔJ*D&hFL*E9`#/^8$, Pt֌)lA? U$PWMYqb! glP`&d84Z[$`<#,\}f"25I&Pbc*gdPX.-2z]rRq<|3.cHa 4okO),䠙`*+uT,?om," n &N2PAak.-n02?%( aN&FW")(ؙ"KI( p) 9+L_@JN 7%;b .p3SQ TШ=K!h~BX맔PZ.!U]Wl ХVGe] %9& Nx0 ] @^x J1Rh jnܰpfcuDԊbs6VDqFjuv݋zsuƬs|5bH.+9q@`NyM);lݙ8p8YaXǂkVPNVk,XnSj0AƳԁqR,}\wLuV[(Pu*KZL5tp1 !նVF /UXz5!<)xPiD2+C`$ (g(LtnXX2qqd/ eKineAq <q4nQɂNe~x}2hEvU؝bjoXeD)FB\#AIt'AӇxmnz-ٝyel`}uփFX7=~. 
ŨqĆpTDwZ8T:ؒQ, E ,C>P6:="Zέ7r#cLu/a;zz^W[kIQm/K[%U,)`T:y̓Y2b ӂ~G ]"赘BPW]b m!QcF`1bESN77yAa8ReGVTc~XT".ϑnDкrs$Q%re^[,w]ItP), Qe,𔶀OdPrzgZd}>(DcWZPB@++ 'MU<ƪyrng"g#"o%fya B)JJ")jcGQ2Kxaw?)@ L?=JWڈB8FQ] @̩bc%Vn!H@ZX>Z)ɗAޙ^F!e D [`vpp=s~ThF}Ӌ7[Y @*M%U|ȴ@n.NR 1MD1%a"ȝ9>dL(G=zMWH=u#m(EU+QB0x4D-uU^*ӖKC$X`ˡ,J< YhҪ,!d$B~(2j$<_"5e93vEc 3UpD( NCG^|܋[׷#k52G^(3Lie1J^C AFf2e ۃORof~ǣ|t7~@zp۔H'Qr&s=!!#8RqqN JKv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';rfJ7.BKFsAN & 6hNb'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vhN)9r*; Wd@Pk@PNE:ٗW?; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@o FmBw\Rq "ѱhN b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v- (ċ[~u኶Ruws݆.lݯǫL }Hɸ.亐q jq Qdқ1.Io%$2WGj8),׍kR}"!\Az?HF3w\Aq@\)cq B: W H sTEkgwzhh_=ʣThNi-+n[xy(*LY}W_nַۏV LLƕTgBIsP>DvG)x{~6T}ɯb[T~N1AuֶW9 }Xj/WWyywgOusufiC*mM6E3`)YT|O~rZȇH6sUm/Wϯw%vW[ot7lMXy+ˢUhQe[!֑'L^cԑ' p :"rWoUa-p&f}Q7d˂-iJcccB7cmjCL)bV"b=ҫU _F9<ʇrmLr C:؈Lk]d&.ydtgiT2Ofup&hB6hzJ4B= ,20[5$?Ź *aLW$dpErCHW6"0+t7?Hક̛0Rf ʨW ĕNjW$8 B|UA"z7aUA X HԖajqJp}Qj>#6:MW6 *-~CR=^ Cg U Q7?UQFL5ouv}s.e}/l軓_ʌ,TEt#3S!2Y](|;Y>$;O~WS9o:{?/CߝƹPu*iohk/khbu8t8POz. 
ЮLmOwmѧ?ZJ7OW]7JכZpy~KE̶Wt4IctS?eZE5:Jyi6_.MӿA_3q{O1gݤ5 l:[}&ꛇmWh<[U7翀ItFm7nWպYmoUn<ޔ O//my۴]%mk]~=ޡZ-ՎwnUh Q̯{_X}0ӾJ&|P Wտݶ jVv'^X'w!=jv\0UږGI˷z1l:7B>?7N5"'C㕈Oa맢v}Ɛ%"^JP1=rg;} >=#ro&fx]<~\A  SmR썊PT\ws) ,/KQV#ջ>.l$|IgFIu;(mTEQs`w<=J;.6 X߹ ,\G>t\;fe]mެ׷Y'靟COSC;'ǿmM 6\-d}2"ڭo*ͷvcIl?8%cuҗϯw+2WsµO&܎'{oh51¼MzC]^}_UڌVyWmO5OqWu gp+caeVr'wks^+mrߕ>2v^~Hzvɧk ˄ʡJJrIJj:&|rv޵d8̄_x ҾS%5ϳrpCUzS wxvNyz?h霷̧AxfG/@zh%&rRWH NicL=0O{8d}g/"g>bTWd5\_"[U]S;/SBI&g\)\*9Uz9-9g[Nf6$+v%+B*"W+qeI NW$TpEjC; ˸Z >Ƅp+ȍB+Rg J˸Z"҆pJdpErMW>"3 ]*!\ApT"\\L2HjqE*`\-WQFo]B/ݜ$& Z?{s;ln;=^ >LWƉAjh0zf\iվC/ Z[9W'0L9u8Hܮb\ 2+VV$+qJ={W+o].9Yl&%酾2־!S[;Mt>sir7:ci:hq?L6DٹG2x&MH W$ئS!>\c_|82H˸Z ^pEzFrHWtTSD\9CuB"Q'+5"+R;dJ/+o\t)3 e}A"]qL.$`kH5TDpEjkW`3h1!\Aw2\ܐ.n Wsf*u\CoNJNa'] ĵajmޅa*N4peW}{JWtq: '玫Onb\-WYk|B`}2"N+R;陃TzWKĕNJ3%ό= !W[vN;*ROjX:'ezkچSlI>A 5MS߉yA:΄~՜М l$W'Sb">Tz9s6㥙y਒ZLٿ#_-Wvpޙdp!֊TzZ Φ BX" \Zo+R9猫Wa\B`'e2"ʤ+Rkf]JqQ"dHvErNWTvR)m>'1ke2oI}T:Ÿz3=ޞ(|ԓj.'] RD^a*pe2zu6!\Ap|0S>Lsq@\)Bye0nԙgF-t:sGwݱ_2`FH(KNhکj5T:  JVfS :\A~"3++.h&8dpGd<\ک Si#jrD'•VtAȕ2ٗIj"Kɝaj_ܓW ־}R^ܓt^ܓZ;ہ)WU6pE}:vtB>$ \CNb&M}0abR0rfrpW=2J*:"S׮ra*f\-WJ-tB"G7LnHWPkqE*a\-W+=sӉ< pZji:I傖a2! Z\OcM0=ke {fsAˠi-_ВCF9 KRETl$WTr6Rs6Rl }B#.u:qkm*">W2zqe 6xE:"S!>L?W~gښ۸_ad$/CvRnRQvaR͍J!)>H "Mr8 #] j2_JECWWhʣ ZB+DWGHWjmҕ,u4tp&4h JJ1) `ɣ+WXʜփItePJ3 `I+WG Z]4=|=t%U/O>\}TZ "=J&T#IDt Е+P,te*:]':B"6}X>AHvGl8)h"qR6hqRfY&h0bz~<}}4E8=O&/AY3|C:A&E^f0L>gd8 1n92"'fxRL$8AV $' ƕf8"b\o%Xʠ^7ʠT)#~tQ2&h>]BW@ӕA)SFJXLqi<qGCWVzpӕAZyЕH s١ӕh$(%JIF)0G,jNV]JX#+-w?*GCW- JI]RW'Haq}$۾LW~pŁsW~hR^(9,wx? 
f'jht"TSYy-ørKGLI ɰإ%#zt3{b6h(Wl1u 6+U )m2q|T ;BBd[`f&O/bK7SUpUd` dT55¶EW⹦)oY ;ш3X#%ȋ8n[^;yQCZ pE G\~7ߙ/y˗;,6k^MS Ӗ(_]+B}ǁ\3cA6Es2Zo-s 񲱴.p~hkvfU7D7 "jP8;m"QxiƐd<-\2]IT{un/W BRqV2amRQ|K`\vxԍ[ N׼qylqYΘ=%e 1@ޒH$AF!ޟ׼%!\/RRc2_5o(ۤzKS\MJ"&'hהp4)++ZBO R{I ($l+s` Cꇒc+1D82pi4Uv Jy=tUOcǛyVڃtMU͡$"h%$2hwCdw2(%ItutEEDWPЕD,tEFNW%R!(:# J\뵛ȭqP[>9]+V[O퉮kB0(˵t^ 2Xt>JVK͔ӱMPr7aЪ @ NO&Л`D)#+AƶѤHNWe$70< e~hIeDWGHW q^^MM,tehA@=J#]I#+|H?4->4(Ӄ+`6ш \c+ʠT)~t,`ttehR P*WCW4/,āw AB3PtJms_rjyMO{LkeJN#;ʻ{ _o;8(#LAY^! 1B]ipGFq묒Zfaj6Q9icOl)E{վǴ .J Xɲn4dfw+0t<2Yߨyk~\xؠd\gWo⪛`Xy ntSU! [~[Qk]Iˢ&ʴ,+"95a9dMYRr%GuD3E.x TRd J)+T1Թ9j2Lg5fHg/4aNB֪ڢ|d٣/r12g5T~|W503Ř'PeVO@bT 4&qiLҘe$7('qƉ Ap$TJXre] N\9+Kp˂s*E$+uF ?C`xk3]wak{ٹI~°1ͮ֏>]U+r9mW߾=|lE{?sbWw g9|d \T|S>@HJEJs^c1/;B$;dKɚKƨ#g%A60cl3LV]Jܤ!ݏfP/ G] 4(aE-a gfEU1Ƭ4'ˊqS\纃=*.,Ǿϗ~ۙ?~r<I>6o? &^?`ch fmwz[k.Ͷ \>|=A^.*eYU2Z+v6?hz.?\1=, =uTLs +) R'miO\dS\e;8I1%hVMR&pI8><gn.XZ'ISm˾)%^OIt4^iWYZa'$ gUfN%k"8{\ٍ_rMZ5.?brcVaf8Rfw2%`!z+j.;o[Ļʺ"j5ܯ{iNB(XeGcGZd*F~._W۫3 Lfzr375=|g`S v"$Sp{oEA'aӷ'MYodԨ5ߞNgm[m>_BӟTX\L/g8{P 0 JHU }8JcMKzةB*^Utc#e6b Ksulk,ƸxXF$@WL5Qiň2d2#4i9ăv޾Cnc c mvT|KMp4΀`Q33dR{S/%ke`Gus gulkQ)c{ qxO=%T 1ǞF.2\JZϕS/:d 7y֠}V#Uh|V#6?.cc@tYchj}T!DeHŲR1qlca׳2?+L3'&L5Lj<>^v8C'SR`gӇ|r os+. /JV FS)c{'Mn_{Ʌixcj&-`T0)0KɌRŽE.J_ܱ4GCE YumE4>xE"<SxGm;d6x(C/% 곶v}!JMI[- ڙK%(x+dU[*r|cԚѶ5|0[nsH rx&_՘|? jIukqFy_za=%m4|)Wr6D?[ӝϺ][|nesFOxl?_E۳H0 W}6O')E `_Z/+`4+QQP3M5H9'ij)%U'/?z2|>K"a L`KdoWwi١=>Hjr;)Qo(ZnbPW M8SF 1w6jvYr YJLD{ɞ2fvg4d鮀]<`\E0Dڨ>$fy&tV!3Yk^_riY;YIVicLe,HtXWPAv=ԼA]2fXfXWF+ j^fy{e5/A٧":&ixM!o]׌/LgxKv4v2yMO 4‫>RYot'l&2q톛)2ޙЯE-П(v33toߺiVBEȓh.bq3`3`³8aL](f`ʩ]ӌ/:zjj.+f7Jv52Rɰ-ŲԼ[3G5:.9ЦOu&`h"YCy/G`h)>Wf[| ѫiqS啢#崦ګJT_#01M ow6, 8f5] _T]R]s3-/b"o"dX/NbVQGqvz0B45N,i#F}֪;!u? 
[binary content omitted: gzip-compressed data from var/home/core/zuul-output/logs/kubelet.log.gz — compressed bytes are not representable as text]
\ښOG+M:q&ɞb_GGN(RM~u<}HS0f0$ $k Z:kXD:/Mj0'?pLy0"؛!p°-q^26J|snC[ Xܪ~'ϕq}Y'8& )n֏\hKAn9J`0ea}*j,VWe",~ ( [>La}(Ucj/}ĬiFceVX/7muinҰ%EԎ郟2x&y `yڻ 8RsFaTHaKig*01v-30=z䞁U&Ds*xD:6!Zй=bVM1)#Զg,KIW\!kAp%b uK>dwiL=V^T{Q3|!<cΉah*O,iI{[KkȖגVk)afX'=Ӟ.&H#zvcFtsÙeKar<|)Ou}Q2^y-ggg dKra{pjk[%eNcEȹӳ,}f.Bd%Wb(6l's:> H?ͅ$>VnԱ\lNXx]a2W3:9d9$r%zO2`B3Dľfҁ &ErZMbՇ6"X8IGcK|:|Z D8/Ǖs~``fca>3Vl;`4FTYܷ)qhlgʚ8_Aewv:* [3^QEqH=1}xHM  PG~YyW0b 0Iw\3G$-;r9>}{{r \3peDR){1(84k}»;^lq8!ɕ2T]< cՆ7M?+ :)Xq)7Yc>s] O}EY^V785G}GHMQb:pԾ^wɞTv/_Zk90Z}/WZ)jG,Wn' vD](Z-WV1 ג﹢cr6"9C LitB)-/)`PLi׸LQ3F#DL l6^[*p! :E5,LPXA Z8] LjaGQ\q|Y)>L< (zX{ QO+B4+ofbl)y TNlé?kN{hn6O[xJ fO%;U#vvteK0;BT%;)k'K4tuagXJ& C!Ǚ`2ۣ j  ؄R)z>e DSwVftM3#=1:vBH3t gڣ*r9DsCCOAzݾ @#JsVSTQ:t/hhv][P.=,R^At)b')ġ'A){{:Æ"=9X!'~3ГUN(X`pqr73E`o zY C(&pyA0L{#etL&(Zߋ2;s?䖧7 &^.L9խY"a~;O/(d旧ː>XtJhIE?ߵ:ZVGhQZ-ꭎnG΀2 |ˌ99I\l s2\:T暱^ ,gd}ܒƬD/goVnr f$?t=ȿ眭ަڤ.Wogyy0-hڃM #o"YK_"j$iMYU.}j#s)j˵9sJh(B JgeLS* Ilٛ#GC$&T$^q!sv9:Lkz]PM͗a3d ٢w~*`rN B僁20Rf=/vՒ̊sz[rek>qC(%#@tY3lvheZ l6ٱCv2E[2ǃmJ1[^Bynr0V>r\HJ!.ޯ4hGO0YBuw3R9(3R9WCu3"fCvB&|օ!ʐ4 Ab+8TuN~jfb9Z%!I[1R_w-һEpdjP_|.hՁI66J+2VY^{4l#u À1ϱr1 T* Hy^&Esc_}&=Bc ;z+c@kM&9ؗ'j4 I?ya !)?^x&Q+'VQc*4TJ;6#r@4%,K\QM);#g,ءރ۩x_^”d|`MS=4ꗷgۍ2SLqg }=?bp`z9Qr>Pp3~jٸ~zWPK]IQhVk2{Y镇hQh9S1J%` c$2",,Zcc y$AFRiͨAwyO,dU.L4!hy.q-M)&' _NIȨɒ, @fC֑>HyM6ZC*v7R˘yn o9D.C.{oX"%_H9J!D9stD d@8ͺI@&?E~@YZuEVa'?,ߓޟ:՗쏌~~ᛓY^^\nOOi3X^N/B?sF|K&٩\iy|pfNFֆ6F}@>R?Pe[+p+&pDBUR2E4G|nldt?$똍u{9!&oFrjn &,gҢ\k֊P )FҜdKa9$jC_󇱂1k[m7iit:ptyawt3[Xa ^ʋmP$%RXV\$Q!JoŅ~iŜq$Pϴ4!OvB:||{tu%ɷUQj`ѧe ˿}ZQ?mCbLn`:ȧ`:MJ 4Jck1Ic):rvx97rWt)AvĉU M+Yg 6i\X%ڰv#\ԚaL;aЁJ 'ߋ`LjRd^Xfe"2YdsKpc1+M YW Zq#j-Z6yC8o͏w4&LBPfri9Wg J/>\$S$]1c,:AFtѓdz9m#YNΆ/ÖIq-XL(Kāp!DӮ81|3}YSᏃVVwq^H %+?.V{J$=(0pn\BX] DI+!y>DbBj7z36@K=}bs˄ܖ@)ay'̈́RcQ3A_#3n6m/HC J:9[j$ iF{lr9Jy?q*SFrԸ[KCxڃ\^x<٫A^8-Ip4`p{wzwY릷ۂv6 {e1xUimN=wqe}ƞ,(2(P.wQ ~'XE %$AhgY*PJux!(hs$*h2֯'Onm:W'+ ZΎ%WG%'>HL_Z\6?/sBR&iXRԀq'RrʜτEXo4IgԋQqrWUS%/ʛzysYq|yycf/Ky~`"ꮫp k]w|cO;E' \O+OzWiEo&(GV'p׉ ihB8}%d,ASpc0Nc12F*xdjBĥBE~oƱAU uMcc/v7r VEJ**1MHdM, c 
7+f0,LHb1(NPJJB !sӘ|FcM݀)/q= cQڳ u?1z,Mmwb!}tMaq,0Z]Y3Ja!i4 6[d"gNlnsTBNg%{u/dϱf!, EQ!3Y״Z<2cl.^c&0`>u8P.'"~I&vY.HxEɮ@./7#]W1k0J   tw ] ,Yb)BClT" j* =OmdV"I;w̪Kƶe, AnjlD 9|'"쀛EW޾m+ݦ&߰ǰ^Ng ʼ~ h]f=Wn?Yd|o>8-: q~d"X}bV4h4ޚLQ簴-O'A&pG_EqSh4د[!#;"XT+[ĒV&*rBދe ,JP{gx|8Z.c콙\ /veĴ峥, Z<~w4^_E_V/ݾ2R큿V5+bǯDf<㵱lU2|#?nt)e6Ñ2ps>Fɱ6Fmf‰sʬ|e ۞ LWD-9z|UJ\'ݪ(AUH^)M"q _5KmmiKmgΗ\6/]%nB%\vJ4ui;:8:~ 0qz+uZLpg FH_] K^K!40@_]8Tʘ3Z3e9K`]͸^L͍00pc 6veNt^6.;=L)u7K~/Ϭ*L 2Uߦj]\1ڠR醩Io 6.b:E۳;񱍮 ;$9uppeærûYxT!`GkLJ^^<Ӝ'6YuY~?<Z74Xk]U&sL,ͺi1Ne|I /.Qu9rҷ㌰ Z=$跊p{RA8WLv'<7sj\9B V Zgi5ܚY%ݜd۩GOopmta,ǘj wL==ȭ[~2NE7*E91pu-_ܲ>] hc55F㻘M K&.)olgk\l4 2 -l4 AK!8YሩWSk!te=VrslHO5%($*A7J'}xϟ~=T+?ǐRuɮ@55Y2SOZ~ D ՉSeRNt*E5ʐː&',UTAd](?Uf]"z3(OtraI)U#龻]}1ӫ0ZiF/j1r)TBX:B(Kͥ|`ιtyeWۥ>5G zi/x 6,Q:łgruzI}3}y58WƖbN^_EŢln3JʮVb3:FUT\\݆*{^!_O鹒G S,P6Zt<7gW7?_ǁ,eJmu*,:?lMrc.vK)Fh;S댷Y8%;=Нt(:>Z嶵‰[psC<*988:5pAמ2;P̍*ةjIsY ;esK)^{KFod{:-.B WXKq 8g RoS9 )?}u`#k7pۻO 3sق'&5$ eؚPQfJ#JMMˆј$:(1F,6R +{m'=,/(rF6 *ԚF3@YfM'#٫Na2!W.)(ދG@$Rڕӏ^D1ef;{5zu2 xo?EKs 9w8 Vaz 5=o c  *Mߢ$L善&7N8?g)tsh:쏯`Q!ciEByZ╠Ps0 mh51BX =A|qސ.,L E6jZ.Hኡ*Y u~=[{>B}il|`5M_ fM+pf vZeV[w[^0xG!B䲜̽nȾR-y5>}o^^\^[~2J{Ys:6YڵmڟHkHضqU2U9@ H+fP ]{fV (Opk%ZB2E7eɻUi&Ç P<+NudnP4([Sn7v;,*W% : D`ŘD) 5E,E&5K8$ax,fNv~Ů{"U ݐHᤐPʨp[kF(y D1D$ k TS>u` WN 8)/arL&q-T $XH,@iDQ+HiͩRp(u)uy(v{h|'[lG#AbN7$r?b&qnE{4b0*Sgֻ 6n$;r݀]},<dA s`{)v")d&ҭ}-t󩛐#7Y@ \zrQrKgXx$!߸V)}3eGI0;*b%:QJt<-jn5H7.dxvSpڭ"^S\(BƵ[Ձzk$Mža1JD_M*ML($ Y0)51Rv |t {uo;3 ƱC塟=uVe1rμ~̶Y~yy)y:7wݥ 9@_+՝5)xhFAMv}9UlϜͤ8 ȺPri /F{ oieO~-س;Xm&aȈ T$ 7>Ojsvo b?|1pk6y<+c9ZR1 hDL4NWHKVKki"xP!8V<Ć%PBmd&h;lܐ [ )?ݛ91XLVhؑ'`׸O6rW<~me)-$U(ftos&7B_~)3}vfde.fRVCL)?=,!,پP%D%5B, ~<}*rr+kdh*$n_N2"E ]b- ھ?EkfhlJbz'#IBtf5}r'\y ,dyޛݘeJIb\ͬ]vzasR:Yĥ}/^xDž T$nٻ7#WŎ3eGa3w^ *u{rh &ERbdk Y3"" sCyޗΓ?5 63QoLݽh#RP%5d%F_7ُzDHߕ ]J$sB8ϸ JCu46DJȩ:z /c_+l +aWs]U/6Fz-(cq,) UޖJl;;G!A2ujik痑%Xn YTYpbDU٩*Ȑ|3$ߍdwƼl&i3(`fq(eIRIe%1Q|DhЧ3Ffk1#1o0<R5ѫ</4Č֬3P#SIc}C>J(A5j~FPFlׂ vZ|hg k1oO}ZIRJ:whXefDGƣIq|f=wi4BvǧG'eE)!;8m](ݜOCCF ꗪ̓?ܿD9λ봹ZcWz,k#zr@6Lr˵Z\>Scy{.>B[`yIpQ^MdWbl틠 
%-M,m1:PXZ!*(8J; ?/w5YM-1\(?|L>iç7a]hF#+Hi! }A]D)I+@{ރV,$w'tU@EkB-djuHA A5'j6@Yk &~B{*%u<7MPF &:ADF[+d& >3\Hz*ѿ D!-#*.o_E3kJdW ցoWʤO;/YJʉGZprpTaNsu[{BˎKz;k_T"0t|-W..2moU/V}zVlTv:o_ּKI[ukVC@^y8+R5f;\-?6( PpaY~+*3(JPV[Xfw섳ڑ3ƚR>钣H+&Oeh9(@ÎXY48s`ĄO-6OR T,_]Ԓv2D{]˸dybybybyMqJ%Dh xhὉ4D"9K͈P\K\\(8wS_g:SIμ焋bΕG8.eɱ3`uj&9nm`2(m\ђJjÑ<1޲?*l$7es3yξU|uF( z]2wucQmJ^:,B@)_ -h߂/>RSAw9f>:l .䘵ϖ=bv-yS9D.W"Yc1\]kq3^T3>)]1".zƣ!Ca1AxPь1KE2r3V-ڒ3rV?r],lH.4D/3J.WLX((/NBSW%a$ll:nO09d"HÉ'W1,zm#pfT.px!xA$(48"52GpۼzttYԗGԊ7r#baD/bKĭgrj "fc &j NnRWmM 9vmne|0bUBFL岕,z_"O]VU`ÀCO]X Xڟ׼K?_۳PȂEɤ/8}tNP/G^nt)`Z )h:gCutVߋ bڗLT#6Re}6!HR6EQ$C)dy&B~d/ܖ +l72Rhzs {S;S;1aw9aV.[9U*N!ː?ćw'Kښfd/4H봇\{<0?3}K ~X~T1鿟Xb~tp9VO9]4T-3KxwuTIsyHBE"RZ#6X g}ǿOwR7'Rh ToZjCz,|< .[|̀ŏkzZg~@V˻ uq:IzJ)7u:ӻt H+VP\` 5j@֌^zwCWC~Xͽ@{9/nJ!rFqžChC/Tj"if4c,95b5Mh>uOOuVT?tR{3á*ϖՒ )xCLKXv+ H̥HǬl ݇Z\ҙ 9)e72[X 6k3FInSFLIx{Lkmwi"҆_?̈57թ%6MI$و Ȣ_=iO"2V4#KGU\}O1;diICm >O&f/P&&dODҁ2,X &KC6Xy@9,{P %#SL==ZO~^L6 HO>SҾ&U=aʭ@V`m"Qz BL1 jab3\&5"T%3m•;kl"D{B0Ь 8RZ,}wHXbTp6ĒZ}/pM#w(BzQ7 j9Bpn̫u\[h@4vJ]" D+ :EDŲG<>πk ["A xj!*|9%9y ,犗 ] 8O5).Bs>lK%RE} 7Oy:An^?/%NO8^)0Oy/:CB) TA85X%@Cy(מxXl:Q':y;T&C=öxŅ*=p5 (;]Us9*%̢à+Yt$4ġ `9 A;N7 ꄠռW'501PodVw $*.H*_"̫&uᠿAbKѡʣ-/QY(+ [T(ZnVi$эuӓS{Я=Mˆm֯(;0Q åƁ1UvxC+=)2 /Hdt#юEih|3ae|T4 .4;:()n$[5:8NhauQ:kPS{3Y7crm6ǼڂƽMM AĨfW+3cL\-z}8=Tzh,]Y̠5≔\ fp錂B+lg 4.b| e02tǥu#zp&mԘ$ sDb8&c Ĵ`8M8#,%@3~egh7))n4@) `J.+DG\Ha>w:B,/RhkQ$x'[uP *d2-ȔG4<]`,Y"T6xRBmfiA "\(J$~Sh6-G[ K*l7IeIBM2y蠥fA~`'O/3H?Dʄ7FcrEץTcJvl7fQn`6@53)SZoq+ƈ,3.#FR1-Ɖ2noNs4'ry;1&/nK4ͪ)ƙ*`Xhs(|uIR,z|#t˴vlu/eՏ^>sU%E툋\9Pnnpϫ$ſ{퓩QqW*]v}y|GO3J }-tTvT)xtZo_ % )Q.*3|nO>펩{Eƻ5/Ȗ^) Gm`(T$.+<"gO oCPysvJj}z@A!P]t<0@ 1#-5KaXm- i&t^|xbuiOn}Do!)]죽;!;߆<%]`Sv ũi9|z5{(;?a6OjvZ9Qr3D-1?N:m=J[ք`h7 ;ͧ_G2QC.g}LZqkkn wiךf#جf{0Y o sCP ͧ}LJx4O/i<&ϣ_\bw\_76*Nx4̟/G| (zd[-[ щa !$z0סOɿÏM὘R{dt} `knON% ES8k7յsqw1}}8Ǐ dg0D 1Ah:(; ǁ9gCہϝzW5kGe0Vbn~ztjn^J`Q4q򝢹Dse#~nWK<[ltw\bPpi~Q !`{jDJ_Kٔ~9S%\z;GFI4H WVt`[O}wQ NWXBJY*C#w'[n#9Wxٵ }0B^Ip;/@TUWI@ ETbKGuV^b#p€: j4uˎ-1onZ|@efoS$xԿۅ$S9=M VyUtl&@خp6u˾5QzG 
?w𻻻pzloqj7'oⅺXB7tx ΍b)TnW-=@6Σ>]_>}0N={FPxάZ,'<%R.S΢5xJcힺȉ*R w%F`Sh=^z@QHL2y?ͦ UM`.{K"eX={TcɑfCdN|n_L'WF[s&ZI8<)wV\<\&$G#.~ݣJx?~mm7`I_4}+Srv "=.hcoMv2RZN70nePX3Rzf靍憯~X‡͗ #|Oy&+|}*oop`ž f́KMC*-\'De9)'\EžLhi9IIawBAX"Gѓ~+eKsvzYkӷ{U-%9M oz"x/R馧'fW޲ɳ~=}Y|=7%4VZgugWDîyqqqqY#o$!R[8Qܙ g6J Z*Kr >ț~L0SJr4Ҭs4Rq̻  & >P.#w;8[ 8?*'hbL*֘|l疣<$Vf 63ejs&o@0m;GiTR13^P^Q͘0!59 *XJ."a*(Hlql6s1 tʥ5ͩ PR#iy*9v #V f]fK $='~Κ!EtrYk($}He5@*%PObã5;XuoʇU&4`y*DLaqrӝ0 UW xszO^$׌wjUQfe+Y{-8ÀyыOVOWӅ ]7to:Omj7GPH1gJe! jnʷUZp SNfS%tq킯DI5Q*U->)L4PuR|ّ?D~$CW:;8nQ U~vd?+M~modj֯7‘ROa{S/4z0Xw_+F4j#p~I)k"Ԃ[p&nHIݼN ϝh)B竖9p.Dsgaq)13[-HH|Ɠ#RVwXV^&Qs{ݗGDywD)Mqo2r$QwT ׫ɨU[թmb1zXq1I|Ϯ|ًi$W%ntjctAvZAh0%4obJ74_52@cM`D-WyC[21\+lϢ\R̋_+I%%Ȃ$q`DeK#Şij``<`f`ɐP `K}WQD#xYTń\z3GF"9Bre*=E-A:Pz%b"/*+S4SXHyPa&}J Q\D4W'h$DXx4&w=կK]nwb/8888Xe;O5cXodܴ5\0X&\ܚX`xPr/ˮwC%Q*~Kv]Jl;ad KbA!p- 7KDRmqpJrT{Jql5(LAk]nE+ihxg$?,r%a(/8+M93Hќi=?Hae<@u B,GZrF 'rQm9ۍ*Q656U7Vg!e>?B#VK9΁u ֋&#wC^+ȼX =8_ "2Drop|?w;`@ۃ$Q,eQXb:L|W~|=E!'Wro=:bBQE ooaQ LQ1\2?%/FE m]~HGp;PEag#,έUd4V9 ؾfI kM6*iHPLACB=$0hn s^1LEiΘF~d ¤8UtX'Pjsa6>y&KX*AE)bDk [K[CCl,sw[JH§O"ܨ&O!U;gT1D}Iܣy;[gܧ+o8T>P.N|ͧx}z?]65ǵk(7w͜wrIMs?!eDvoMuSI/B[lP*dIm]h\uQ;Ґ &qW-qJv#`S# . 4I2?crm]g=l5[) ;АX8Η~x?TJ g<zBd&AeQYpƉzD ; ?vo)9M3?35Ͳ9}%gR3zg Bڪꝶ5ЭJ](xf1j 9yXL.3=h\`ÃQ.;[R=NR-L [h~}@4SmBC&䥫vޚ:C A˗ϺB/AgVd 1)EC#`%<8ZŒKͶnVmV}.:Mv9)J\9P<ȓ7nO=l-xQxT}F; Uh,By&%'_|E&j>~ZT0KV(HސiTU@@| 6.]Gg7׋'z 9ޭWxD bs<ܿ|pn-{] f RYhi_I̶%I\`xoO'iQ˝S[KtK ~>0%aSt_cW#P@g±Rf^=*Z.~|AH{ZEi{RWWqb3C3~Wc19Mj%8jҶuKKYlKk5/N F.:lo\ X>6-·1+o_eTyTu;"A.Ϲo)lEo|`_:A@Jg5PDZOn6Jk]i0FFѭ/Rz/Rz\.dsjrFw1$gHF˝Gp`GjX'ma/^WLE6\3zU7duEı@]}K%q-勺o>%MRd1I߷TtT:Z 3ƺVgS}#*kj]qtѵ]BtT9BTcKq<6ܑ)EzJyl}P-[ß 5RN?jlLM;XP;Xczl)q @,˵SGt@$ փh8 Sc31˚c(Hp\s}Ƚƚ(08[qB2*I; dd. &xka"ZZ!(X nEK|UJ3(6"*瞡U<\9! Ȁ1ȴ4H# ~ REk}sph$)sxs:}(S''»8+ tUW'6A4ʼnR9R Ix$-1TXŸk"#ǑHqe7\">aB!Hrr`8`ՀArijxD^A2; K=>v"h[PR-oPJ!/)AA1cӹ=ಒwLl% ^zr]K(. 
',^Ux8SSs$ a fV`ak8GY@7Vㆁ]F4e_ADKg-BB~wO~Fbz&r/RRW,>6f2X`W[?+s>KyBfok!:j^Pd ?[^tGԇk$#MXZF`e LÜy͆JFkTW PbEa١L)uifƏp`kr!1 - c'!{ h2G M4?`aBZ=Zm=-kWx4Z޵6r,翲K O_x C8'OhI.O5u!%2 W4fj6Ǖd9:ZW"^lS%K20IHA iP"jfL ֲNO?ݤ#vJD +CyZ&LM(->UQG66,zPzS&`o-['ΐ'MEI `kߍgqz<-Q6fWPF6\-OcZS.tsQ?Gfeeb6E]!-)`(b00\]Dfzj}d7Oq 1huT)wm2`/LF"lV xxޣztw3EBgtq[~oPOɫ5 i08ApWPOv!R(49ȤN`Pt*LIF~f[Qt=(&"{2,PBp)Y,G8VK?.]$UNL'3)̢Bl(g"{1i61 {֑M]i)da)3O|PR!+`ayrʲ!!%0$F-Avў9OI&5y/Xy` Y I.omod&[LN\PfŃW|STZs<i9Trc򮾗'F-'ʌ% 0Z&X>sC"AvH ;\.2xOr+dr 5Obh$pL)ᄑu۶SCs0ӛBt(%1f0{AcK NuIb<+RoDkk_kkn ჋C݀ %=˃ }Jn\|2rwjVy;o?k KM F Si=`ϱ laWu}s1;l1Y4O|c$~jl1anv8LUk. (H) H?R1E)0>Dlyݲ\;Iek'iǴLYiGly0}o3p37qJAq%~-eRL>z_&f %xRf)Iu6*cҀ.;e1) :|RX:Z{$/vm6JL1 _(1hGfQB$P9D/7YX 6˲3,6 ץLP-,8iea᛺L*%yXkm,Lˮ1yVSQCP ʎ˖v\DD1˲)ab4G{6~VAԣ¨!ֽAi!rAܛQrbHm7푟l[#L:B OXjM>H@ň,,1K^Z'@ut4/8y8Xp-TAK-{zS8G9]o}mjLdPҵ I lySSkv O6Av<|y{vL?=L+jTQ[W#3r̭)|˧cyqY>1y9TCRsn$Ǯ2ugY>.9NeYKĩ&%(RF12bm\emB^zr(,cݒ =AQ.ٯ.Sq:(PkR J+ yS\VhJE`a3}CNqo˘|3cuš3EתTR8[ɜ6'muf5xkpɒ DadƘdABR[NACz-hy][cVw%jDTGHr O.ZoBI#BH:XkD'<[$cTFJ\ܡjD@Oj%ZZS81짽 jo|{ڣU잝.ihcow).#rTf0ajLC:/{K希Zk(iWMV?[ǟp֐tQWN|GnQs(gҏ^!/.'pv`7]mKϒ@K*EEFB0yeFWeGZO8-^:BYrBƘ\k1Rqd)[2R9}6]MKPV˵ZacWگE^ʂ$\ j79l_jY۲W8[tH &:m+#붳o*J)Wa#m٫0+y4- Ru*{6Knz( YO+9\;qKCKYXtnJ'!$)Mƅ|Κj |g9,dcB-#FF-ՖMyڱ!+=yJq>mn=zhfE]{UT𑂀|((ev2Bҵi}!Eǵfa.~zaFi>}87r<܇Ot@s6]\p`}_3}/_VŎRCV?/Q{@8[:Jmy]6&vPp{ދi Pc8D߄Rh?[6b!qفWtKmh(.R"n*EC.6t^z հ^85҅^&烴m% 5JX8~uq3q@gLsq(wPr;Ń8a/PC{_[) %&M&͖w<kDAq&Wwh4Y AW#7R6^@iT ސL' 8n3=!D0XݥA^76s8wp8ׅQq$Sa_@@ZK9H;E;lw0`Q/^ ;كLV3B@_c:+s۠D݆TH x` _o9\\6# /}ƏΩQǿs^%dw@..! \ol>v6KckЖ t=5d;GCG$6îrvTz! lKzsazҺu:%C4R%')tN䏒ȯZwD~ӜQ UixI򵸀l\N |[O4wޱ\c0{#$4u<}d胕Ҡb 6 z|3$m3}Z)F1M> &mWzN xDu3Ȇ, [l렔<݉g]m>KwmKrxBjvu8 2nQxV:9Ҷy~mzUӝZ) * 5clHx^?~}|lƬ3b{uc'c^:8ހ2M@b&T"d6:"f;>GQo7gw3sݞ`5dsrcf8|lgm3}xK 4O kG̹JiX1<چX# hC}CC׈bPcrNbp5K2mIZ$19Ȧ+x{r=ԝ K'^xsD:VLX! 
-N"ܔ9<[l|OC^ٜ{sg~~yDJ)z%$Xhsah@iP\a3H< ?>hw0y8q DAH:ĮLq_ee=y5Rs;UP%I:у l -l>jd#3K KxDI}Fz8IJR`8*, ZrW5 +^ d-2Jao8#!?N0蕙OrQW8j+|f$M5ofy~o/'8獚-#@:gQrD}Mߏ(@*#Bvٸ\qyƕ7GYϪ [Fki(5pkԐGugb6͵ǽ][* toHD][vm?HPnh;w-N2$g0,Kli/=E TEp֍7Jn֒muXCw租mLf]_& ,Q Np4`nkޟM#}g$mW pq!õ ^@ \kǮ'D7k[ 8 CC V6N{ ؉_ ?Pz%.O>e韫wu զP2(.T6ݡl'oNS ?z:ۄRzN2ZTk{t~Nhm erQllb@Rm $j6 嵆GO<=rWt# W*zOpbɱ,lNKS-ݙ;4O>|8wu; (t>nC74hCsociCQuM28F XM7ϽZfbɹ훲G@K!]Rƽyo˹ 5M'A O[.yۨ7Y@.嶹YVZWj$txMc׽M0} 8$M纝 8uEm> j#2n 9KJc iHabԦUL $XG.tdй l͎: O|o3IFk )IT70yY*3I*aB": #Pn;? 9#@Uŋ鵛;r 3obbXeP)\ F\푎}313]EU2;v隱0,b{Jm ٠;'.+$Q'Duk=O-+(9`Z`xᾃTqIRX;_:329" (Tr AFX66$2?4rn#97_hf?.:uRQ YcB;Vɔ ZG%s̜\crQXb,ʉ(!NZpCPUKb[%`5xY8Gk,?Tލ]c;'qDF/0{ Tl 6յ eޠkWN*, {(3(gX 1wc5*tk61=l{IΓ'):OR=!{Mï%{2ilPLE3<#-!(,!+ /,qMDLhPRbIDْ&rM=sf8.CD5I!6|e᠍ tl V V,kf+ET ':$k6K[Q &הF1}my\I}pNiu\Ȑ:&4؉n)gS:ƙUITaدX?dd!4?6LNt`<9`G@PJIJ}kONIeJ 'Nq0ӣ[p AK] c.`3Wt =qϮ}8H޼7u 0`3pv^rC3vP)i=?qonԣmA\..jvZK Ui~#Ne ʑ { ˁyl`6K&)E'OZ[#7EG2wMm$(13AGt71nz:&\!aI`0nH6 I(UZSYy&`ųQZF0dMUy#Iƴ,%шnE7Ш #󄯘5XlT#RLWƨgO27h+WX!H+l9a|2Y!J_@9Sބ"Ϥ٧\O9 L\:7:3/x0ӂ WR@%Sp&&%\Mw1b#D4V(0Mdwk)epj?@l-b}Zk. yfysӏkJQeUJ̝ `ٵϮz9/IJ٧ssʼn0iu p4hA΋@=zBH`{k#'$p'<%}m'Uax"Dֱi"x7OeaDl%حY`RpjEqLw\uE }'t/ ) [pom#V语?ڝ(3~BϿnV\m]Z?s= ۛݝ?9{v?y? 
j4svww^{;؇<?{=؃pݷ;?O{>4'{uяut7§F|{-wo޵;;׮94Z8~iupP;{_NisUnfDityFtҖ*e~DqiU4NSٸ9M:}C$m0x=-Y0.MMU|b=ռ:a|/.LTA'}q)[I8Tcܖ{ ieX"6v Nqn|X TUyueҩ/\'\/W>ҙѯ_UF~~շ1zگ<ʯe5O[*|jN$鄿\^k4Wn#h5[her% G2Iw*u8I/·_BccqQ5lF 3jo1+ G `~*Qv{m|TRn^{+_3cM}h/]x{z o~_p0FYIU筝P#޶[v?7,ԽoJwvik oy_:rs(Fе?wnTlt^uEvкK,w z/?1xH78h_B5trgҏPC]I{+o/o{UQ2 Cli^t  [\UQH8rRY̓BA|yG\nz)zY 3h5JH؟N{r`ZqvKo'X|J \Nmhρɿb:z ӛM4=Թ sK1 m_޼BfC_TnѤŷ1n^H1Q8] yeQ7VBx96 2e-bďR!VºXLhsMr.> s\sͤſ0BH3K/&No6ֽlyD {1EVS)atbveqZJFoOw`A&LJPSEoRze*jFqREg^edm&iۖ>m6?Ϫv~vcumkk֘kz}*%S/?m[V[M5Xp5fB-2EHjy$.XK1\ce@u/bmֶmyYfmmֶYO'_B .gXm#(KAWS(Kd"4?}m;w C9zgӃN3B=&t((\gs%Pt2"9ivu2ȩ AJE'+6I{>ePV@Ye4?te* 9p@1pboifbHLk4˧NUA>"|Bx(ӽ*=zG7.C+0c4UX}1 ` {$q(a`n-Eh* <  R0$$M/K4,&kÑ'NT>"b F^x$*j}YG04 VGͭD4*8V3z4SHj dD$g25“m$l,l6 ,l'lG)8>q#ɚB^PZ0`υ*1#$ h;-T(Ȋ(HJ wq"A +΂M6MKvF"&)sm1BIk#!!(FLj)DT,Tyv1B7 MUYfns~[2vY"2U|t'rs+*G ˎłYc -Lgr߻=*-&w;9CC+==)Sv(_al#Kq5p)&=5z ,rԤ(Hs`IYcDΙ293KTc2-lh5&L^HdvJeimYfi;?i ;GBj(خ?8PAI<"0[xϴuHFbt$#׵~{`= Q_.?hWV-ki|3ѵ$kƌe:`$QH%c4eQfBY EdQE&/B EmYfQED}25=Ryq|ڋIf#lEB dcQjZ ]%\<}]; C U>0Q6͛vQAuVD\5L}1 gaDU"Ss! ^1ɩ\r>j)Hgv})n,l$a^2mYfa09~=c. +yq\V+ "S[Q,EN$LPDM; iй=}M;!w ϶1pvqNj*i@ X sNCR/?sӃpVPI2`} CB 8/:*-,=ÊYVCQ&iRHﴳ^+rt;djhj[>|T>Ae l0~?uS|n )f:5ـZZW۾,qIaFf,vm7vZs +q'ň<a)KizWTLUHuz kb:6ƨ4 C3yй]Wg:ZdA0f< 9r"`JE+y;HmLz{.H qB`$ŧx3!d9`Y5}2ለ.Tr?>wbKj G)gJyԖ:('a  9~(yg@s̟n {$cϕO>qX516z <~*!&([dQlB)P70TdfRGn &JcTfŇ ]*~÷K}+dԿTh뛭Z*a:Prӽz)_=9zwo;n!Àgv8|V#3"#/`bXyvWn#Ҳl'DuWS]]` dvG䩈s"RUO5Wo~?B]go k[N-2n^]/^CܪϷ?|Vo;UaN*NaȐ ?>Z|\9Ni呂.Ĵ7s!?]@S4`f->?B`|H0C{vIepXc Κ!}0NWy'=}?cf{g @:ʨ [ }%Ar W_>O"^ޜtu=w343kwv5_ó d`}kLZT ~YtgOIxŘW*LMn!d׼4 Ʃ2DulT]HH[jC } 2;;^1{ͷCߐ:7꬇/ݳ7QN=]/`MUz| *P_柟^1bSuU sړO?(?^qxߡmQ˯noHQT|czgH:0CDb;xkW3RdE{w(D=7o..Wހ{^<"CUgwOMa9`X1g%ꆍן൅x O^Ͼ:vJÈsx";/>1>:@Nţ]䆒o{6qp<շnFJLWOՃ*nrHG1=_ͭ6O6O6O6Oojs=@~sj"A)$Y7K҄x&elrbSx t6vkCëBp]OA? 
GrE:{>|Vo6o~2AИt'`%L9ܽw $gdx-ȮH3=g~Y$Oixa 2ǜA@ ClpeUU$ ÙJ|i0T|T*Za/.+Ѷl\*SkYgnTK`f n3DZg$@^f\$J Ce퇮4 :<*`FȻbKdϱ%!뇚Zk: JB\D_9R+23![_)>(tf'@m~ 4hSb8S\R(e)LQLG".C;ݻER"$VYjK&f )30cD g0X֑m'e1&$㲭d&.ZXH&,=6ĪPV6|rL =&=_evy|77ER'X7|X"F<\˝PfAPX64cBpJ@*Э:H+Vm TnHH^ǘSIB[>ӍY-'N`*m.]A%nHtA@`Z aAL )yW3RPқ^ҬR0%9iUsAGtW0%Rw/>6$ ų]^4 jhZCUan{rd $ʶ_ S⩘`Ur@#AsgeMWo.K2+mȿif+9y=($R <SKR; 4H./:T CX*SYqe9?էD^N,D#W:4!p(KZJWq]=;ɟ*gnha=lUZ&kt>]kbb( qxօLB^d#O*-2ׇpמbr%[mh9̑I}ˠC<")CiDsd|ޱAWBs`Ѐ|ޕh?ۉfT9A88F炗bUK1 F.2JIs((rQe26+9jy8{QA /R4.25h_[9$m%̓;Z*$ժ"m0IP%Dj`爐tFP wɍ jdgiaqJqݎz;~ 3س\ &ac9b`D0̀L' b+JSY)Ei ٧ŏդA姤d&6Ȓ<=-uzrJؠU BRdz`i>1iV09xe?M"N6 #vS )43zy2$;>&,vzv f"Ox-ya=,a Ń%B[`<,;`=f1h1NMpu>" hyޱAs2c]i=5Qi"xU?zswF9`nBՏ` |o*޻qlLٙ0# .3>Yq7LD/^F[XF2=t8Lۘ^AGJJF:1IߵLF{GE֠3`ϤV.'Ura)l.XZ-AyJBO@cS>0%3 X6|1үg3 'L.'w]yFnun%)ȄmfB7u$G%.I Sl8M$ y 0]zc L  ڟTչH`Ҵ_?y{w@\R1E2u>RG-ΩNg5c1BųS¿7`R~A1{Ŵ)F^aΉ>.68Kj2/e> }6s}([^*Z+(ģcހ5uv$\7lstL,[ކlP &J_ޱAԻ'ܑt} 27籤O\:@l3~.a# ͦ &4x!gl̉lbB'C1p+X)XbH02ejf ~"BB$oϰJ2٫7|bݽRdxⲽmHmtK2!)J};bNRT 1t!"Pwbe(=D:L[鍫9f-?'y{UdA%RmHoA2v%]ݥYqtzh۾ 4bB5J@Yw-UWktŇ|<Ӊ,h LY!H!33Lruk!Pslud CΩ}HsWĦRHY+nn`NW13lŗV=NU6 -V)KOYՏ"c`Ck:c|bi)\y\]}(6"UP:Ÿnx4v_ Mzvb򭄑&xi}pؠ \ER<9J5>La ôX²a-ˢǖOh5ziG\ڻUhr|ݱA; kƇ#1"{{^;crw;_ip$톤2 9P3m&&v!g$I sIT,}y5% -][Z kR 2VnLFρ#U >伓%*G/Gd$|E%=J<&FdpڽȠ7 ıD= E;NloR_tyr䩌"*fBuy/~'^ԆTWz䡹K>juȓj~Ѧ(UVExg5!='E BXhΥqDk$"7ۜM7"A Zzk\k !TX 'Uv 5$ADңPsRQ@(+O_ ^Q2%~Z ٨fa5k*qx$ΰ:(;2\:$rqw fCߏp+<ӍqD2ޔџwu" ;-[+h~[2w^׀MfGxwct^ؼv(6oo6R9Dj3O@#;6`E~d }? $Xb:j,=G05 X#:TXwؠ#aI %/z8ˈ,Cgzt%42deriHLv5柞;2qr{Ⲗ?ieV6v5fbT+6){[IIf:.dXw>*\+,="z]^m%:v:9rt>_]]>./k >q"eHx}խqܬȑD5/_y7?mYZ3awe͍HTeS#a:a8ݞWPjnSLRR'7A(^":XŲ2  d@6ediKΨE.&m3LCȴQ*4 !>Spbx Lg18i](  D:/^Yg|FlnA)&P1y&SE%XTWaDIDNS&!WozV*ϝwH8y W9GDGSC5HC q:=804iD̄2ۅG%'=7*bYhgNtc4vGUKg GL^:IeC"VP΂B;&3k9YEd5tu?^.%Q̞_/l2;~u}qvK#WڭĈawKHD9 ibAC^ЙXRTCV [«l<׍2KP[hý/:Yj*E"\,H3-w!\ ST>»nPxhIRRz)NKn3ba{=nzGpqa^\z۠{#6,NS@%$%'- .4ݾ|db.}ytWóIۡ%=hh4 }vȜ+aBg֌};c? zsTpcxt uQk9;?[Y]k9Z.Aa9M5+, *ysp&XeP0ҎGÇV3'Lջ/wOj4:^^z`~;D2Yo@x~_>9QH br YʍOCZ. 
lv?\cf B'ۨͱ-ДئZaQlc:P8a$Uqr"7&Kc8X(VضZ˹vz q*kQkd&u:6⥇'^\^᜚(A%0Ko-w{jcjxCCbt6v)F:R(p˰I/-BE1asϸNVaƗ77ɻ1]xDou8e[".@6͙L?¤駤JO$8RqO9O7S};۾4}?k Ϥ"39m:{ 69AJoMxFάVfhuoT3&qNڳO9@j śQ/\( _e脀]ӆܮ]DbKzK_cNk,§RR)GC 2iKIh Gnf&w}6OOG4jkC`4׆W=N4Zyo`C ?|t&aM|ՕYx/(wyHB30jd9wɵpYfܦy~e%-H0v_e Vף;aj?SPJ/ j3?q:ە2\h2W_E~LBGMk@hyI )+ƫ&r=8ag5)g(ԔBZӳi9r*ɝ1x<M˓#1[&SbE ΣT Mٟ*E"Q7hחI/A9gɝ-AҪۈ5r+\kuBgN$o\Z@]o.rHnpSE5R<9ʲ L8Mx@çOcƒ,"f쭼̵ z&Qct<~D1f&h!Q|yɔ #4VRҚܩNOƸ)7SJ'Ji΄Ȅ`"_]ghqP>2%:=l"M/q6nA)^aR+X{O**8.1coύW&1 -W>̆9QaC(ּ9fVT5k(-۩xrHě)ׅ@IW%Ɯ}ܠ1ƒd= md"TC~K'i>οc']Q,kJ#WB6yfYWM?x}(g"U1Ses*³Y0A]aupɂ>j>Nbk&8pT(>ۗŋ^悵$?r`'5m_ijB!zH oġ3LiꙦW"5G:nDrÄMq<4˒]`qψ2`gDdst'! e] e]3tŇDHPXP b<l$TB jS$6D?$RvR|o10b%Оޜ=l>N.L@Q 0Ls1ӟHRJ |/C)IRss;r)BEm]ShP]VPu eIe,=c'"uՅ(%:R ep)df3 d& I ] I Zշ[DR|@!DFHAe/C~b^ RȪeHk)əsfT.=:<Li@*8C NK a͝-]#S2{j,K&%5.I+ʩ=~mB<,y(>,|L }w|JSQ01D7nl\f<łaR;z1!ŞҭeҟY>i4ޘУ?2'.R`Rh6KYj4);͝Fb(̉B#xjsES9O.Fd`{E#9An  TAsǻxEŊ .g?Gw`ctG<1& ^GطrKŏDIn;A`RG8l%h;v/2}vXIűf5uXx n9*.g&nEQ>HsZ.QGER5j6ӂ!zka'in<'FȼjdG.AQUvRWmIs|)i#$&ͪ-g%)}aA8a0Vd.[iAD j2!| 4mIWc6M'Zz&M@heWPTnUP>-Y>rɀvIW"(-BM}z,`d$Kڵv+X+*>E"QcVT,BڤuN,1`wk굽kB umkפe'] snp-4_*j)^EE`Пp_@ }W뷄5-DS˪S IS򌉪+NVP*|w1/3=$Wڣ5uW_ XSL'|JJ9-ܛ;>Q*Dm>yxh|'n#Rj zi '\L5aeiq@-aX8uKd;'.glʧ\( i2Jty鮶~ ph'K3o7Ú*{rWM.]/~_ǟ%ܘ?O&4i/s^%z=Xa- rS$~<nu# 4ߞߙhVxȂᆰsdUޖyf9vRP8qE.q.P#h#e_ F^gCǻ+ѽ7É_{3$f Q2{@Z%ՈÆ`$wYscaSMC.cZ LkEv ,f`xa@ċ?:X2Lv`8OI .#T"/A5qfns;rr$+tM"p`mZ;.[Lʯau--1)цͩv0QQjஐ8σPסq4IMο,'!0* xB *³\d bB YFZKIJ\cYj`[*؄eS|Hr;f)x?J&fїGkIۡ%a/%Ѽ01'o%OX?{W:_2}l*dfRtH+/#ɾ@-jE"-;qx>lPMx]_F߻BS@ʟ֝'4|f;nf;nqXፋyE. 
e1*76JF3,/bہZ9?%Hs1RqaSgo6_W!+FeF*3`\d$CƆo0TH>kBkeg- C朕ċ3I*-PԎ0Ba$ w#nK5>+\C޶i&**^TOQڶ|.RYnV*UFTsv LxLB1v σ/KXvl Si8wk;9v^s\/Ji3B Iv`e&BSSRSr-%>vVG_~\H_=9w[>9#7[rZg;Ձ{l$t)푎KrݖF"5Jn5Ve%H.1\03asSp 3vo`pKL9uQ[HSZVd#R\UWzĦY*ۺ7Uef6_PZSD H!*+R͉&˦v|F8Q IXL@nJgHV<&Hp3IB$`9 l[̥!kac14vH/hvC7x@`X sx5 8L 51898eaftԋh}K`Ё Q&jY]dɀU (- w$ؾ8`7k"(h]#U}!!H[4xTo!t6ycEpگI^WU8N6ǩI^S{#H܅k/}|a%p~W+'s M̊&h·dE#(` "\ $/]׶; ׊q}bhcBxI].4oADrtMIJ0TeD؛"`S @A N@\PaB)\RX[lEPN|JSP`kUw ǵYJ{QqxBRz3$+pc zx[s̐%(\J1 䵇jˇчʫ-ԙa`:Q.kIyJnPvqNɶ-rGϔc91p.R0̅lg1pUߙrS 1, ra[&z U%$gr{e>_dahx0 vJNK3|]-)򓛢dVᖵLh v9N?S? Ѣ<gdؐYU0 ف1ޢ}!>C C<{`rU PM'O?}JsS D6e׿TO]Ȟ'Y|V3uQ1^zk o2b\)(A#p\Gdע?z)ޥB!&I~AƯq'HFʲ@*˹O,*mc5hٚzy\0XH~vYykzkAO"mf~jtMKT_vUa˫ 95F eG)eMjUwAieuWO2jyrv}FaCv8_F~u?; fdւ] 3DM(ُO?~?cFL*F+_ߌ}q߮:6bϣtUBX[0*\-Fkm z@;@mo.NL=q7Ee|Z}$/UlvSaA"?_zb  Bӳu1K1)?B#]F%ftK64T E+Hi9I[ga8:LYĤ#gio4\>`]L&VBԮR͝]m`*HaUc!8=//oѸc~|/#?*K:XoF?|<=MnGiN柚ߧPm&QQ )JJrVh^IrXI$ReE0M!hY)*?N|jDKk,bJ%K,Dy6pwT-_^2ZgB&LU.hp6Nhl bN~XT2A8#K1DPN9>_ I=_`B )l&춪vhžt-DDarnMgo%>Q(5]CY[Kmԃ"hn:׳Ӣ: ǛzI"0c_oi+[gMQYBlP-+\EQQn2nS 㽾t@ "!B(tH[ϲ$E a ypAPs°Թ,$V,ϱΨFs!9A&x0ޤσ}h[Yj6{nM0nW!Z>u&b=<+b١iZ# :&}VSHWvb6CO,&Ep=u 0%}1yh1ڣ >8=xp=XvEޟ<8 wKzwU!1 aʓI4.i\іƁR G*qF{\2{h\򢩸_ fܒ*9_R%F}1FC)E(r 2gT}iBei4,REf%Ml9[-P)/4 "h:Uw5 ؕCƀpJz65m ݯj5-(?vDt)qoC$xXJ$8Kڿ>O5(㸶?x bpL".µE%ΣML!DsK2&E>63@7VkކRWTxTR׭aCܶ4V\ko;mk NA)Y-N?y/p++U?) E2i]9)te`P{:U-|y!uI~M^0b8t+ߩ?DyAq<;Ju)֒GU &v6|^׵^%Iqۨ{t<_~2Vjh&U3дgQl` p%dFFrv4ev[VNLiM9wmX‹ۮ"I [IE)[4RÙ4-{dI m <|y!y g>Al[q뾟ݫCS"9γav{(z){ 2RzN\otg2e$ )hH`! 
I-Wk2kM5iR* Iɮ 4Ohtdur/L% ^`D#yE5?qb h*H?M$)8$`/+ע8FO߷UC %U$=jyl5X+IN{P ьA#̓/C'FѻO^X#Sc$ 󎉔aUرqD3g!uy3qHX&X *EogA"6]Bpkٲ╄ ?fwV rZko12:[Z(H-cId B˨|hdƘ@}$3 #T%Jign^0ޡ=A3֜ JԈT@&Zs08t(c!u4p -}=N4+2yyߌNc}M\E|lj$ *5ݘc^a߆њVs%NlC1\=R힉@_cw%N߹ 1q#1 Rp"zk;'_ye7b}W®&>&\^UW ~K+I@fɓvqU$ E>u-< (!INP㨕[dt4}i*ᶄ+ShcVo6$\7cwm~%hLZy8SwKv~r dݘsI-QK7$3]Y<q<t;KMy D:<Ƽw 7^N\dU^8+uϧ~ڜ:@>'OQݤ"_<?|'1/vE;:iwvOd_t6J~kYU )v."Ũ7U}=x7e>} ~G]nFߢBve{-\us^\_NնuP$e"%,Qi|E -MߨfZjnmdb 085s(}+YSk56A9ˬ?NGYΜ(k2(9hdSdQ%?4stH?4ܿ }7mDyB.7((35O !DdBccŌÇa1RGUkER@[σ,g<tlث`PqwYĀ2^$5=< u:BW*s2H,R 4Ř4DBwTipkP)Ư>%nb60 [Pmt<0emoepEߎ}yqW [kǭ]xz`e ~:zR:GZ*&nK99dQ4wzc3;A:^с3,k\^jPd6K=`hK{9ojeEvB)o5|G}hўNF1q'?#L!pKI>SQuŁ;'`n6/,*;(5sN4sOc9nNϻ K5sS %X"= CNR҄9n5 q֜Ka @3P͂.dQCH7 ~RMsɭlO\|?=5L}rS7[bc#IrWQ'1fȏQn>! 1o M(qL.7HW $ulhv;;}u{x422x%8NFLKu X:Gנ)8de&'~KxMgc/w,}}u`m\m!m[ pVpX;ZrmnO3pWj;HgBAN㛂dkhUR~W$*?v|׽i@~\8{`q r/D1BcYfQ~_-*7#>nP>b0qAvI(!Gӗ*2[{ۆ}o/NNVB_\'Pt}w*~-Wd6|Oޕ]:]K'${sŃ'(1eKCuO[p1\5٢o)Guݥ$&U5U"$P0tO$~lۉ%4}d.,}x6t Hf/8o"N 8U܁;#7Bn<x^yyyD]Ls$0E Ri ¹EUy~M.«kË%R㢟3_k0<|چlfM/x>}{[_ v(pDϾPq' 0os YVW~gn3=33}Y Ҩ WI3Th(JKO%?!_t^!]ϕ/hr(ie=/AQByz^ei ׎U}v-KX{Et!*tGm/D{=S=޼\5&!\TcBHX1 `@ TR |Mpi";TYЯɇ4LUcûl%kvBiWBt5:FI\%d3L%-ӈ % ڒyB˂ɴu)pZ;MRũrYB`aNQ65 m:|c*Qt()bl>qZH`^BK*c,H]9ͱwF gXUє6kwBp9w9 nsOq/Q,Ei˙J&&^wk zwr:}X %A&%HHY{ G;B!q "DY݈l=/_E쌲q4 sw;RHo7zDŽC I3N־+Hls_OCAƜ!l JH!X}1WJj!Jfv%)s&BLmd|%$plG B!<o%Z֚c+4i S6hS~fhwyX.?% V 8mִm?Xl<YlEwԵy(򯴍Zzi+TMRKS-JLKeO$Aޱδ+=6h|q#ɸWr#(S|0=+ί&|}Z}_.l:~ ޶]jnOv12h3T6j(&T}5Z~tpOQ8k'Dǹ8I6Σ:Òij=*ǔ ݣԘθ{QxbxΓTx0Sa[ /$&|`AP#(C\!%t $L`Tv ab%Q ډ,~0,$}rmB-IMH4,P4jea䄦Ipo-ȉt-)s}3Bzf kZ`mhéiD;B2[*Rm0J=0nPЖ p Do`l,C{&gC'=rX1Wӈ dػ8n$W:GD@HE+kez8eupHJ>&oGbUأuH|H$ț5f#rNٍe#l4@0aVņ; 3orGP_ZI&}C% ?BE_;!|gHj6X[m#n@0l$Fx3+ ux3FmZ7KW}x{hm߿?& vawb@1aɮ5)A 6Al1> -Uq?'=2i+UhdZ %M(ۂGG-65">cOwE%;<b '2pQo\>RbXAq[cR ;* YGTC:.%e5lh M!6 /Cχyv@"ЉeDNdHpzu]h^ HDJd6/$!&4*XDS̙2gcjGcj0}BC XIB<=kEX1z0<GAEZEsؽJe ֏MҏOIGȻKFi݉N.bCJBa 9ٰaO*2$5֏Mҏ?^?@'e?N|u'Cbv ŗoxp~Kz 
vE:=gW]"(_lu4qÂn^怘|wϘ}&MM4]d=$6q<W#fΚk$ka"n\$!mq1vjƧq1I("31)!:wvv4it!Mu,HuQ+zTzTzTztĦ)&!wqeHݨ!,T$dQYa JHz^E2I[=+)yv6)79ޞ;_^EMMMM.H&≠B{+'NzS:MITv555UFd6ײY f,B/0D̢5h5iSx~̿a^=&uL?k0͑]VJjx輪/JL@ -w7}3;&986\Z P`\s>L>V 9SMyͺrDIF&`ȅ|6Qi<*UV'4F|3 Tf2"$Wo4Eްb( ēl|Ձgi3:f a/bEY|aPC;j|0]2YA/G $0,fo_6hx*$ٚQ tH8$͸HQfx[W9;P,7/ JxLή sSޥ&{.rnniɑYt{[{W'{u{ lr졜N}@aynӟ(, 6B:vQK5\c4eDw R rHZTH8N|3*^oOuޙ4EFv3p_UXx-jaٍ~ w/LQ?Q+l]3R79nex 9fsLR!I(56~'-&jlҎA3pŠM9b#aNQȇ&\d|;6ZC;iw+aGнȣQBu:H/_ q3#~-6N+33SF\c@D*YcuA9,q,LE`JY$儃'Ë +%HF0$U.ѓ>Hvg m{:Sv౺ٛ5jmW%4f֤q32sa$hںnɪ4lĒrlalgԸjtr\('EghetJZϊBI $Bb$Xy]y:&5kB=):i[DAk/)bhA0zm +]q9%MIZ:/fjܴSL GJR`D/$xs e,Cwg4|&l#j{4o@$a>mj/,fAB\\\ O,f>TIt|.ϵo}p\tifZfWOݎc܎0NE==[>2 IqMc[#g>/6祓fQ̇oS-Z&_.˺(B/73I"# ?_8;g?^Ͷ{sճBR#b&u Z&q7 mUg_7m?-J[gh/Oi>wzY5ub3DL.%`5f Y밾<=wB ߖb{%ҝK䛁Lx]?m~;f~0e(͝ gltщ_JiyA^,n>⼺^ZwWϮ^=k"_.==;{vr :(c]ᷨ?=LABm*f6n1=wdCEQbebLH[[-dKy߶ϳ.Kչ &N8rJxqs%@Fv+;:9.+AL/NHEƨd@e7m3B e"R6Q+Vm`Gm9j;jB[LY/5ZxHF$1e")葲N|ctඤcRww?+je qz֍>Lo.$a+; "? }!R# /A[`g΃qP8b{>pE먛F"1 DAZ[3=p^ s9S^p;oWs^[>9!O0rЌlE9ɀ=V)4coMJO=9fӎanigqy3/Vy"0ᇢeK;I:Y 4/f^-> ފ[yEv_\'0' &Y #4N/L2V?2Ǒ3kđ&UnG݁8d:cH%1ZgקD]O ,KSW-N[NX/- L:i5sX˼qtsv]ˏ%WF=$> <(svM7eg~?(uv H“TWK!4/0]%eъ@ l|H`3Šj6{c78KLPeTFZ`De*Tr3%] `x#-)M1B7>PoBVBA˭[jYRP: *Z2ˇZS*'N-'ЀƌѡJ8G6Tjt FI үӮ=D噄1P`PVOz^(ZL|SSʽM jlCBi,7y<`w`_\äVĝDŽʊJytc''K,zv(,()1FbX>q5iC:~|QtmnFJtM7@t͆ck!Y{sPL[h6$V-/2tޙi^;„գ7h:Wg|Ȝگn&ocke4"sp+\vVwiۍWQeUCzv_>/TIC[qb}LOG +\Uʋ>Y6o V\hGg]S>38pXȌ&NDytDG>fK#sJGh7c2]pq<% 7=:Z1Wƞ\܍RcjcfG?P(瞧@^@gNnTV8Zø31YˆtM?l}ktOs5iszq.Z'ƴo%>a&1*jC3=:5ѴY\eu8MUq~ 1/(O+ҙݟR w@jLڡ'/tWt/)&j8n܅lS މЭObPT)}(_j՜m4&ؿ `b=W"~F"2AZ}Sj7Kj6 kXoI& 3d02fs/&נ[rtUs&=h~( (O2B&]I~Qʭ5X} ~<մ)7UṚG A_ʞ 5$@VLxV--,w|#'UP҃@UO JTR-^7{jԅ/ o?' mɗϭ&:׊(;rma8.0%5I 翮Gu!u}^%M/Cvn_e7_j|vNI0}'-kWGδ?(==N lMO>i"(ׄ:63 p@} 2b[xz_ȄގBEӶs '8>o&r2R^YΏEv!ST )TC瀖ά%&]'N3qSULǺ\QjtЗVʵ@R˙^IƈȬrdY3j{{n]jBm7s̩F[E [qKGd.uݪJJ( ͊+RqAjBa*/z`+BZtPk}]o^Xݽm9<"]-hPiHv@k&mW3M0WS6Sk@9@=^6ԵBψ b(yn ϗ `t)-9ЖYG4!v&%(h9ڒ3ŏEQDvJ tYa֖F2;bC%0ޓc\]h /5|)<ە+? 
y}TdFW˹*Mr!32{W8GV bܙ1NK1)r.p(3* 15l~~9iR,]y|Eos—0{ %RU\ɜ]d S@=;z?ޜwg?v{swN-sc lX:O"&Rb ګJoU>̶ ތ_ qz̛ʅvz?#Z-3__,o!7T΍fW/Opqh&FBZꬵ/=k$90=Lғyi'ܡ^ Y|HHm:J;HƁ,ek_u1$O2 *1{Q૝o_i5zNůY&Qd}~"H/W7 *>07N5x|SWվqMzX+;4f?ӜǴx`z֛]ߌ$<͡T54FJCDiL 3bb!14̐RbN80/6H"_,/$Dt^9M,4Fx5Hx82" O&m HSn!%PP6p$rSl^7o|[o^e? ]*JoݿLLAsc7#Afݠ٪ɞaM0{KO}MeE䩱zC,g&bց&[}!,-.,Uci[|ɀ/[`7w>Ͷ} Ӂ[YP#*gzX}x6LgKeX+"c|Fvnmz#QK`k qTbZ4Q4I0wT~ n~|X,lQ9J 0LJiB>?t=`Lp`fyB|o=v8Sf<+ \-†aap7G6aaB)5x@ >Ȼ bB̹Z# \B\908fFaf+%#2 } "t@cXЊT^ߐ!DC c.0sSS` rгZ,d=ڜ|xcpNweIŖ٤ j&ɝI)͊?B^S:c;f k'7|V6QgRh9%7eg\yZNݘSBes25{*孝^bǯWGJ{Wg7xyf|_~<JI0ǢeԌV9! 1 7P\TȩYR Gzs,CԸPOg@u[YDZL44O5Lu b \{[WT4(!\huSLQWH40-A,hY-2OmIx5SQ 1Q =9_f =!X(,Mޗ K#BlLJ*y=Gp,)`jI,E7[@yZ\%6PόRmDԤmB(43J &B)³ΝGVμ'ܧzM )!"죖D{K=?{OƱ_!e#}E_ 5THJx۷8WcPnX>v[i;D]"{4q;fBmgޓ [: \@? }c5iRB$:UUJR=+&Prɐ:;PHWV 3q4Wޙ1Ms枚4R SIcb-GqKFVCrGUP'h[vk\rw;d5F&I v 4 ^r 6=6΄(0(eC9II)jtFixXQ;z$K@h_6` `izDe"71_&|bHq:i@CP&%$V\cb rPd (e.TklGb~/\w;Q@ł +M#zV `+Dx5Ç+@p]K-rAģyE56Acυeg%k,2ݔeZ͟zb'NM2rNZWz\J uii#xHTj0$[Cxzӄ۔;1p~`(-q䖅-3p΁Δ.J X_"]V䋰 dX1As29c5$8K V%:RL|j8%{G1zVaWD|q<Xhg[™*Ed߿_S^QP]յ5m >?czɐ JX<=^匒P~<鐗q̧cܺL|Hk2eGmZTL( 3r!wvĪ\mcFi YеCøcF5Tp٥#8Ixy1+6o"V/D*kDJ)hzW/:x Q6 IfDLT]? 't6*lяSMEa6\1/em0d^mdZ;MU ||@/d\ɄbK+D +`Vp kHh/H'YW DIO~]BpQN%LDǬ9)YK8VctNf:t8 ~js~E=4Hud6ʆkT|seC.\1t4b:RI6CLιl `Xrd\9q Avtݵ,ʨ`mNݸNYy`<k4/luB6TJOϩ{Nͤ˨f)ʈ;Lӝv0AQ9=Faon}`6YlCزؕ`jKg*C 9lLLi#hgOIrs]m7:^vA`A5$-; wo "GFVJN8Tz X?o U'o|z9 @{8o}x8L9{~ƽpKrǓ7Ymg^͋f5)۹d|+xsbgV?vV׻K}9NJ "c2%-TI'M⬵ ˮ|UH\-)4j)Yi$UhԄiSnw1QM蠻J*eWl^ U8r:m6&rE8+ rTl6A)"g:f 3);k𻒣tgSK8bk:L+um>91Jzc BCN*TdD O\*! W(=':?˥_;/;n TwfKjӝSq; UUtTm"T^ X\[ڲ*drvd:(oM9sZ->kӅJrƣn ]žG@']9<=,54¸$xGVB\_<|~LQI!6J :XѥlpOŁ -,'λeb|צo 2ԡV09}^Nב, >1Lzf<bBBtIKXTП#F~MkBX 4懠ACLB >jj,10FNBؒⴅv*ү½APonDm+:T ĿsMQ!OQgiKeT0d%k,\ɔ/&5M Έi+$P< $͗Xt kHgFD-[ j>:I*i Tp.0v.( Mz>hl8fkTkYĈmV{/Mf"Jwv{z GwӿGԇ_MZ/Q)oX4yUp64k-WC hR%E[{_FÅn;Sc;`(I%]TӽY}0QCohIϖ;z#9*xHSgV]Y !8dV,;t(][a 'X{dqxWSh$~Yd+^RˉGe! 
-q&ܙsbEuPnXptدd$!kTNG))oٝ[â i0 g9iBrFs`) @ !4#&p`.7$i'9ph҄y.1k!ʚC$ar@mN"3i9(]Q2cchY <'P a}NiNs!ˑXf렩i<ATABDumxс|3VfWE̗5f7_- M~=ybxw?<# lDޏ?!<Ot g?=LsoƓ H Ïn-RDPB0zۿaȥVe.d&ٻ޸n$W,vY,@i @59읙-$/n9G-#ܧɯ*/}@q;۲Cypl`2,;-%rx:αoO+#ss&6N8|L7X ;ϏS5~\`藳|n;mnG5j^Y+!o.U`%<v'+ExQ;8^oj%rQ Lɬ)R=mں䓐Y!"b6"95B&=L:YNpsbuјKv=[A훳{áDE-}QKD_CHXa 1!l B'Ȇ$&$TNΰ0T1lDf?zhp"wWW{ǿl%4rB1S'Iz^o#q4Y$ˌ*FżQmI՞1t˖w ɶǷk{\oz5:=JvhSaߜTTƐЁO/pO=81wOcSՀJ<12=kG*2š9#'`_\|Q/ŏb!&ɼ/ֈ6 @GM`̿z1 %0Zntgt~2+QS@CF YJ%R b[6:L.f=|pSlUb[ b-PJx>Fڗ`5rsC*TY u_(8;}б[)g)T"D/X)`EXJ|5sJٖuTX:#3kM:cgNI5aN|F>bddV&t)dj(V (6J\NEJ V` rMPHV/JP_bn%z"X` DO(rdpl[>C+̎y0l)dRV22!5}ę;hh^{B?=0"N8آlrlūsDEոxdڔ_9zG݂95ǝs% 햯mo;G,QnLjB{ b sFb3{gt[3Vg4rF2n8[H; Y1=a<H(H" =fab<6-Q5{|3m<7%4ڸGM: Q<ƕ19i {t!,V,Q a0 A|np9mx͠0y`Juњ $Vm坣);t>~_MlL NX9xJXåcZ%>+R 2:Y 0V?$0}3jJQɁPGa^u,XL࢖WDX)OamJImX}*Y)$'k#HJ|2h*kC}&R0? b)CoBЮƯcWH#=!"|Õʬ}zCه Є"8EXRQJ@|QA䴴h0HR aiGV wsep:2u ikI*'@%NfC^T!m4Po4R;&hrߢyPu*ad:Xfm`U^zXF+򑛋+^eK>Yv ;]ɬh@iڎѬ1qxUi3tW dAU>n--i4*8zh[Jnz:> i3&T.[C fO8z)3jv.iYJ9z-hMMqzDi՞1-xܐ8&G8 os-a91$$5d=bx1q="&)%=XFǀzX 1Z !yXU, ީXdu;1u%^&i6F$jV}2 a+9ݫXj\:g5sgi:3š'7|I/oP/oz|[/E9F-h!Kmx՞1 oy{!Sׄ~qr@[af1x`TAX {P~F<ϗ}AU8%g/n4z?Qֽ߽xzj; N ț:w]ue*Yl~1vsp" &r0%*Y/흛|.Bi#: DK&R;II.b j̵􊾧Y6Shd HJvdBrZg"rZnUB+^wTAU `{m'Pp37?h@uLh,9߬q}_=/7<j=+WήB8>_= ^YcMinLU EPEw5_-w.W?yFEp_͈, !e>DaQ3[QZEg"AmQn֍mn~BR:_j0& 2I?X9ML4&غ8j7j1WvʱZ =1-N(TNgϤC\懥ͅRdMA+)d "@4fk vC:q^zo7+y6j>he6+U L~Jwb %)k[QS)#2۴$> ޫOUc1xrl?*P.@b(P}Uf۽uE*OhƩs J-^t0HIVb2nKTǷF],3T^ިҮ5gӅ.4(z;]HS&,-FI.uhh΍{zq=rFPj$sqG`*}>?~f g0߽̏x_|^*T/[~X{|O7^O0$$,ʰ1-YF":A}9 *'g,THsFQ@*1oӆu-WcXIz?W< (PAڗ`5rÈ!*QǬB2(h5]~ď}~}7=۞ nTxdQ&e\޸%3M#1?0d!@ -:fSVV$:}Ɍ! 
Y 'T+L Ic#4J4db9_3'؉Յ:!X$Oܬ3zzf[-[>>c>-ka l )h3c;9:x{]\{z(Vѯ$dsh5Rn/ѝI͇nzvA[2lTVk@:h^{@wKQJ-PcUSuf4ڣ^|'ff'rawp{@?q`H-qTYN;VAWqrhQ4rwhYݝ/5lq'ag_̱ (,,󯤱/u8Ir[axw:tiZ{e:NIK2Z]f "DrCl - II2+s a6ǂiw0 ض3_Z):ۈ'Kr|A)S(6/[fɘPnNvRʬz53wȳuw U5k 's~-f7>ߤ:A8h-Up F%~} PS~^n<N@ = GY{Q&({Jר|uR?eT$d1E^N.Y4Zz kO^-䲵oFR%}ZG˲չI!ʼnFQc R,Q A.=iug'@9\' 9(]ӭFS6Q*.b]khqY>zU?bu>1a$'&,6{|oN$"`_B_gmoZ[,hڳ3 C r~Gl#>ų+HՊ<#Ӏd>6l<%rodnvVݚ YG' 9a|:$ 1 @3iick2Zr{f;*Nx'[ NGLxDCy:.^ {p (pkn磦omvLm=U|{:Uɍ*UQEkvfj6xc%Tڤb0VУ}nކha4i?Z.q;`H&͒T֢נ=MkZ&eP-%E#>vkGA}6RpKb5;)N@{) F.Od9YmҨ2RhdFJq+{^]v>AUa^wA P1x9.@Kl>dqqe#4i厏s m^VUݼUk鏻~8`Rw鰕s3 m bZ9Yx6mgȣdOF dVB'tDd}qf~׷wM)x[зQ#Zvz%:b5$ywDAv']\jhI#o#OL5Q=?Vϟvͷz`H+둸8j֌ѩ gf]r 7sT=kѢȁr x'{ ԇ}:QS#9-μpxO6Ͽ05q [L{;+Qwg%Dݝ2K,:}9\RG61yViAy}Q~}_$I;"1.|\XqIȕ"D$K "1pl9|컺SR-CEpnM-ľHhU֕0SRYF8JB+ cP JHZ:ύ1@2N% /g鯅$^CoR.d57*dwp%*vwODmoׅ:AJ 9tdiּc!z'.]%Лhwt&yS"Km_jtu T=T_Kxz͊ƟݣӊnP&z'\=UJ(kJ3 ,v5Y+AWqg%"\v.B;q*gɱ9)*3E&mPtVIO kZeUrAi麄}`j*3#{/ӧ#Fٱ%y:/5rTx',e6^N#ؚOVX}9YR)uՇēg|LQni7&/l[ܝɵ/R<].1yFNȀMnl}<FPXC0"2O%̍ iEh|cR[s75obqR߷*#;H=:Thf)|XN.>٩vJM5eL 3ͼ5>ʂ4s::[.w_ܛ{ror־7iNْUȬ"Vyzzt)Y l+xU^|k6tXۣoX9u,ޥ>8Ӗr8"Sd91c,[DN .^0^94R<2m;@`4r!LcE}p\`EW[t"x0xh^WfPr.YdQ!Y 91$9Hӻ jM针VW{T23jM)I3mZAicBkJ?0vD\8.IXH8M;$Ω3=:Ei䐂 GXR{l+]O*Bg\kHR ҳP.$יkCQ"sdoC?^V l*(HTRh`@ ^@`tdGi| Fpxam'Ui lH?_ To\(Fn+-/2HQb mu7onr(2ڙb7h%Eu794U{4=Vi ng`xfJe*K.2U6y][5#8_D!{kTI_RiЧj Ԍ2aV(ѻ 96q~#HY'  ]OjnǤ{F+NY' 9LLҗ01e&CJ}DmQp4ݰRn=+{m% |shveDK:h!C-6wPzD&jYۍ If=flURQk+ngv{9foŎ,-QbsR1|uN>]H'nyD" b@At#K]#kdDg!K^a~\LV͐,Dd!3hVl2&+\9,uYDEl>}X/}FtyL׀ ]y9h=w =( r UF!唒f YWU)5i!ű0X&l#Fwm=nlŀl8XKrV34vMj$JDRMRR"ÉEV_uuuu :M9P BΓJcGOEtIǓ?e@_-Ujb Sgb`!ɣ ÒH ~GAZ4cx R9 | )*4gjT8ABA V:b-cRJsP T_upPn8lǓItb8#wvx^V1Z9*qM."VlYUjY,1K @YE PA^Ε$ZLP i0PfQieĒBNHEk,`(Dc2R)6K5-P:DLzwL,Dѝc(A GiQ,1cbW{DA!. :6Ě%hlT4VlD :]B! 
Pk lT4lI/?l%Q5@> WZĄ6 oa$8ؙeCn˃)e]IG} ޅr֏@ޤyM"5JmE**b%1G pkCZ(L-9x?VP:Jm(!EزXYLD/P}Tb&6B4,Mh]; V?ܘ 7+8^8z 9`8 ?:vNz/wkwlGs؃t3<M0Z<َ2&n<+O qѹ1A u/5ڹ\<7׼3%~XQ=9s?0}g?vh9sΠ~&l7?>cdn'3+ֲ r1>ce}d;g3ҡ L90[{>ST'`mߋ(&و:t04>~u']۱q ߣwx ~~۟g{x~dewy~og'񝛽pp}ko~:/ݿq??]89yx}%' ng/\y nx-{ l4te "lsRBus.ή~\d g߂><{QgZȕBC-H*|lo6N첝Mm}]Є2Ǖ4 /30?a!~jVwX6be5Q4;lZ{R%R2~W8ݙN_ZfB7`iu' j2YG ecSAUG ^j a^>`VNta״拉p9+gm ^:z|*Rph[7s/0@I?=o'Ggg|_)H/>ްa(귉Q6i4N 6Fqgd~9oQ"8ziʸ~hp_ 70ϻJJ>7Lӕѳb.fc"qft.IZT?RW2*)g:,G,w_Yv~iib"sĦ$Y0N- aJRwTZxNQpLnwL6 $$qi sm8iKXdsd6A ӻ ۗpT(bIWQ$q/`ml~%>QV(޾H3ٻ r]Lvnnn_XJ+^}1mb$@Udܔ,Nm&RXz0f\]V̈́ko9t9sb:}2L960zK<0 I_ Lnˑ}@d%}9ȎӔ-G5wh"Q%%c;FQX AP*;9=sCĴ$7 z[ !z tъ!?&\!?Q#b]$,/ʦPU}ؘ=CJPhyNb({uɸ%2vdB+ץ#&Kl%ͣ+H Q {4+O3Tj* =Ƞk&,r[AyͳbA q>k,n󪯼IS$A69Ck¿4kM7kї_z-]_A.nIN[Ͳ[sk.%m.Udfx[3ca]XzNt. B%by PSqJELU+ҕd<7';)%/[QV2p},7!&S~;1Yİbsmmar*6PRHntКM%L# <ޕ0皅: $1+մ '-;X¥+R*f$KWqqt5VE*g!.B=YlY%s~gbKNKgm årrЎÅ{17Nev0NzNݤhk~k:$L{Ga{NչsPiո'<q;Q}:O5ł,ژ 0Eq/6kƏozdqФn@ǜs}Ryۈ)j+=$RgNM(3>lQ^:4_4xk2-:1唕qf$'\2[MbP mcCˎ3=*gFV"G)uݠJ-/YD'oh`-yFV"Kl ڍTVjdmlvS(/YYnk 3j^x?jƔ4 J9dˡr0L{u]u=xM_.մ֔RX úM̂^fsli? L֣A?94Ņ4Q TÒjPX7%+"G\&bH44Y*[طHۘUNΙ@.Ϭ %}Kl2:/+D?r663Tt %p^:{;_3B1b@'_opoJuvF+gID1 Xb7YkXX_M[=> z[pk~5.eY^ۢjVFQ? 
bVj0sѺ·mRp6SeWEQ]0Fj-!h_uSbYxݠ3IrnqppVͱRwBI&5EeJ0Rj 0Nsz$өO߽iY5CݨviFSdžsoC $cA ۃ:!=l>p8᧡AG)];ϫlbrl'G̵+nִhL'3;P7 U=>eDKGkqq[|q; I:?󏿽wueh{n~Q'dImns \~ v G>o Fv!moYP->|mC,0]Oo0an]B%LaJ=>_W u_xGK_xp8,4Vdq 5œn'čQ ҟMo4ck8)Cܵ`uNG~5rIA|1}K4@Jjdr"3 jQoؒ.V(K՝PC56>NTyҧU~K~ To4b8ujdnTukg4PW~an|\KL¥1D&hg`x^'~JUWw%~уcyWUx̽x:vv PoOOG>0~O40`${}>}u/{ c/~j&xR8tz& F0NETn#Q5OaɀˆC9|}o~~56y_'ei)^M1tAI>nQ*0}א24h실Hߑ3쌤Vw,;DZ|; S}NMzcArjKVI,$yU|ʟsf>Mp:e]A;(Go+~|))X*7_`jMk2E`>L>غRri%*80z9@$\ʕ/cķ=(C \ - Gr#*R lBHxeWx[P;ܵ%uB11.` ʹ8w=͸ܧ涔mτk^5tPxK8ܴ&2Z5`>D́ޤ7ٛ坝n^7W7kx3tc5-,- 'MI T p@L.WIco Ef's;Gy(lc/,?Mywd+K-6e<]?8 m1KQvV23L$5X6k-I `3?l 6amsZ*Y29lNjqrDk Ftn${Fr@&׿hST L{ ] h&gpCd@!,=vNQZ^(X:{ ^kKbe}|\^rf.'9NP)}3F9[|wt)lŞ zVBcn-0xl˻hXɻ+R*eR`FrV~hA¸@㖟cU2gG?T.S9ҍWĩsxVq7=dJ< &rwz]Jֺ~n&eF#ʊC͇ UQeDCQ١b@UW_(GnΐJϮN{JŅ(6 xj5.S5nEARvLk>[r˽Sr/`jPG]"z` >r,kMTѨ3=GuP덆CSzѦDj1#sգ N7l MWe`9qN$u[[[IYݧ}pZƗMH S6f#YkJoks&A1J]NuV}9x.Q{yo8v_3/fX!̋xh+1gC @M 2!ܸ4?Ag"̂aQ, [`".|+ C\pGwFܸ2q9oC\&ֺ̄{=j6N&FTnz`.dEl7wOq:…R 8uF@ʣL1Ƿ,U>.mȃn$3ꢯSgtj ⟺D̶0ȷ%ac{JD+,ТL L*3"A0T#,l▹ >'g.i+tAi.O?Y-mm mG''v:O J)&# of\O͆'vFr ,T\(}7_ݫ6Iӕy6/]tz3!RQ1-/"E]D@|o]ioH+?̻;k}k{2uF`<6F8%CXc:IJr@*D/{H0Q0@~)8>IK\6>> +zJ *|~K@n*I:<։]9G}"6c1BX{YΑ`E)qYbӊ=@CI H4J՚J3}F/!^%ܾ˔_WpMLQIhS=|w~W}<9Αg<|F L$`_¸3sc# e >Rnu]ѷ ' OcZ'0Zs(AyǐV&)8Q)L5R߮\45W|=Fn݈єp/` ƎVt0ڍD+VSyC L_Oۅ}d/'gb>r[Y_;_һɩ8=f9L7Osss)+anUV]YR9n#%+=h`3O%%sg`Vpw٫X Ė뷛i}dpt{`8OJ[hn{p#!wu40 Ew>qТH.`ʃܸ\nɛ7dyfn"6!ypR-t.b,G+H5רƧo]|%Q>T.] 
s2>d&# ޴;@lUr x.w&;i_ Nj&nZ*=2xpH13K,.ձ=XDH#od^̈]-n9{E2WbU5Mzt %224)1ZdDsJ[E,\Y+="2W+4Lj6勬6ķ._~A3%ZEJ5[7IM}?6)$k oGf LlؚpK֖CLwc[O7&(!;dJv3$ x1~2I޼{bLg+IU&/'tsbMs\(Rq# $Q6w$QzpΉ>G̑HjP=3 h {l4:l`:UxAǙ1XWh&(C]RH 0~FYՎcq\!0#q"s*9Rz8&FpLjpʐpjebuEu}s{: [?mO!%$n{ $f&S^^_@,y0 0i_c+;y}sю8(B>ZBmӎ YeI`ðcnʧ}^$1J{2"S(<_ɚ ?'B9[ P޽UH pYKGR]"A xDW֥ ް弆K|gWQӋ [ZOOfM.e!X Q~9 ~2ƢRgamUa&;TU#BsU߂LSQO%Vf]R\D0%K\+t>__=Uw:{okzTRngp9*L9JIL0dDRbOlIz!yDjLٞÊꊋ" U"fv4dapY0Z\M%uvDk}]8HQ\!I*-z I]*\2)T/Β6U47PH![ Q'ootߠߠOX"Cp^™nTg&EzK1Hc!0.|`.68z܉"KUdKBϯAiRVS#_!tGSIh..{K|"y}BJ:EZI`%TY0-E-f($K} !<fK~BZ `\"czG|׬\㰾E;[Jd-϶9[JD"%rrcUҖc,kpQ} \=賮7G2G}߄~􈶓мI=Wk/Ry8?BL0F N6=RJC1" Ok܄tt}% Mc M'H!X'ˀד>F81\+A )a㐀:Ž')\,eZ错LE)&b }궷ޮn{zdH2OzR:4#]`>eR`aWPfq79"p#eJV2cP:5f 2sL_ӝ@q/U("] ]6>tLpׁ~RhQI=;8n$#Ԇz،{^V͛N=|IT󌼽9" Oy71f01m6r*nLcfV\u}s Yh9wۀuQy^1\OG]e[Trp1a޸!>\$6(wc;' |Qǚs4VXIQG8QFkqOS}3G5Kt n(VQ;_c'Po 32=i;^NL*G:p5NذZOЙ 8ƻAPmM~,XH^߿z_둎6lqx-;`Uih%t0s<`s#ߧkz j&;@4:+oRKPrCk@W)à%L3cHڸRv=d f9l<=)krث]>;3&]ۋ8!?wO`U/ߐ'W}]zlQ{0j#"ddU>>^7`$aVxqWq5 T~i9৑C;> ;6J+|Ҿ SyۑsϜqC akg-!wŖ<3' &U0<0IK3Jdٞz#3Vm ߲x@VsZ҆y!}*aChxbi}bT.*;_:ݯoBAMqGsJ [;?c<(yxv0" =!FpsվӪy~>Vv[Xv'TN3sJ~W;*ٖ:!"Z[7Y[Oܧ擩<52Nb_ŨLffa_h7vNǷ*#՘L{5/Qo*<38:ZQ8궁xs9wtzS*ðzN+j97]7n4~'{ӛv/?^/?_>|o54Ts<\  W4hpO5pr8/kAʃ,>\D/ꏸ`0uӈ1(}xpxWw/oa~0ՇOC60>|mɵfhyeo̝j9Ͷeċm'^ ×mB Bh:ڠ?4 ΠQ Ӈ]ϑͷ(M1GJ &_=E0 t\o;0y¥ppni]NrgƯ3%7]m?>[- yGfK:xF-.uik?D^?G|zjD/qmZ2ENv)~zoM#}6-O3)zhf';3-{<kG`|ꛧZ|x}ksx`EDۺ@k?̜xzلn˥tz Pzwm{o>1W*Rfgg) GPѯ·n7«?nxPl_ ܄0NT0zл~w؋i^o͎&ɀ7}]~ܟC-pZ?u<3:~|} hi)^Ƀfbطq\t2/(1Q_pa0/FE/2u8+"1Ym[iM>tN;E3ii㉝v:M&SYR$٭."EQ_{uD$xvew9.@ں)[:D4|lQN<+C7M=~6˦ƫp!z=YWO32"S܊t;O^.S1ZhT2;tu"էO*oJ6oeSXd,:pMFk@ rJwgŰJj0q}UZ/b )+W0+hZc'|#7Q|ӹ6%X0놁:2YTEg8í%S~4wXkRʘ\_:[iY-, gqvi~l%3/PVzx 7#7H(je8 x.ұyݝYGko6IxQB1Lva '.Ii֝IcIŦw!ce]ڳu]& ڍ$P6" 73SC$f8-=@TQ\3!;kPlB[c:>cBÌ D@bj:bsta7°6/ 6_Lj7n!O*eޭaٹ2v` .@& #VQVE2:RlMb#XhDXb:DHs%2*F¯ZBci\Ő"#XG-8<>d1r.A3RgHpІ#NnEƹrmw@ T,iBn4I4`XcAK0cAOQ+ !RPe?d+T3fe:_M2ht.ʇ_[2sHev0T˙Q`fi]& ._,_;!R: !a`i|&2/8($J!uS3C N+ CP mLq!DY};r89#2F%N=*cBL\Gs?ZgT&=ZJTdX//UJMB 
6l{P4)[r/_aU1dkx2DL?9uC\:t3aLBʬ`3"#jX0 Yu<Ԓ T{9}a8gBTѽN[ U\Jg-1]߳h5R.1\. 4mUߔTrIf:9w 0 RnPs!L6\LE_Jѕ'}Ҽ)EK1@JQ;bNr$\jBc\o}Yy)W|W\렿5dDb7?)Qk+T^Sͧũ֑ݘzsAp{M=?:5 nKW[IE"rGH|1+W); (&Q sM?4];'x2%j-< q1aM\ٟvSLN)YK9LjY~U`ċɒb&y p\Qvs%x+?=0txEE;˕JΣ@fZ)sxИnVʘ]ObRh+U2#v0-cfꝀ_ n'қY#(epߎmK9|[.oBւo%XAu"*d449SQ2/e!Gy`Fʒj;sioM5>m;I" $R;)" biD%D0fՎ[]jw#!L.-R-pQcͰz׍11pЏWl3w\Gc <{1'7֑R=7Ln0L|Ɣ/wt*^bc2x~"6OƍE2h>(## y*ZF{Guᖭ[6֭,N:Xz- jVy.4䉫Nv.R/Wvw& o2©3ỸC *]u S&uL5`rIټ EH/OK[Uz(0VBhNŤnj$+I-ЕD΁cߏv=jN_\ dk9% X>bcpԻ^ vvմbqh|SJAaO(<N?>ޣdξr wd/?a 1}_owL ]6اO7 (A﷣wpGu(I;^]u ӄxnOFgGYyӎocxڣ;o@Znܿ2}; .{uj2>z ny]㗟}3|о gݛ=Mô7GfFV_{ިC" s;Rxr6L'eiNfлO!%,Ep>c. {d36ȍ//d.; &vR67|WdȪʰ +{?\ҐVxN4sd}ɓ$w :|Ou$ςkr摥Y 71I n^>=+Ci@CD plj1}NDQP|ZJgef* QX ( $1fbUII"uF3mȵ=6ΐϜ!I9 @cÔ`$ @\|4*͹ιɱfsʨ D)Tt[nRZ-BCN,mEB!&gA" j%QPHqbYDA [4J(տ-%RsŔ! >E)4XPǑ8K,#i!R9Ne(1ҕcFd ܚPI9ż +2HhAH5yf'GLIď}%tJH7iЅ~ͩ0XYɨ[ ]2 ( L,txr,GqG qCĵ+*-GhيbM&Wb~ âיHHkäD!kl @ Hj`z+cLL#U< W|l- (,ްcɹΤ@/4}wn˅vQDC*"-BVKpq)gd0RG,TZa3"JD,`V2r)3.FFt846V <3# I%Y, pD8r SFMdH`]iٖZ9BcMLyGדG(XrF" ڄ4̞dt %FxK2ʂ:&SpGM@y#$y%";^XMB!2'/ nvbaܰ>:/ǥ0 Mgxu*Y9?ON~1tǭz45I&ѭqnm)z%Q{('֯;|OdeVUA*H'.Q|AK9݁aa+ڷ/wK%h'5Y>>ݜů "j)/;+O q^6uQT]Ӽ)n*t0`!2f#,|trb6l̜UG2쓁͟p=03(if.ꆊEd`5OKRJ7cVJG?[8HϦ@"InElg cZ۶_Ci_S|8L8;tH6'S7~;e*I$ .M΃R++Ҹ1?p;vBzXt瞆Ga~9e[?{&3%wmE I"|W ӽ:Պ5ljZǎ7`" l< g/Q-e.+TDAl3y #!H$ŠK$PT Qց Xr_DUay!.vf͜SNJi !nb`lX1u'ܯyqajL?tf6#wGA7 -hA$ !k䃱 Q\Ģ%^@DH0}4ՒbL K|m<4/$ݴ;뤌pՏ7fʼ^؈)WG B96U[lms^^ac R`qTijU3.!ߙ^HJB:{c@}ItS:y}C30׵J2)m֜ϟ׸CTt\vҦ W>LafG/nn͒VǤƖ6mA5`Լ-\q0OٛAup^nb4ԃ˾3-9%W2vIXBA{>e?*9\T j>z&+h)iBQ'޺C{wSӰ_>$#: mR"dQF| $elˍO8z6puv/:VhOzn$SzdFެMf?N)4 C6 +%+Y9Sh i}&]mpi!=\͵qsl:n{\/Ø]Vڬ2mSut^ٌsiU] ZɻwåTe^fF)KWV2syss9O043˥2;^̖gl( o\꣮dN3e_4hxD[T?*avBS?ù˥pڔ*2rT‹gWPR"1R =L # )đ@$l)gPCL@C~K kOzi;#VSM` –ʙXf3S'00̰Dn\+F n$ežI}oZA @}kP(2;;(׷74GW5 >+=fj52ZeR೷0Iى8xDyӇӻ3y3Z*h㚳NМobŪFǦb`@Ƥ!)W1I0_JaiO\ Ka;ex)X3`f_FT-ÏD7(cO0DXn |v8<e!9`ϟBgm:ƚ (hLx12X.8-q| ;/ҨK~` V3LĶ}giը|͠~p *ϩf"U؟E6%~kR#3~m B`cMmZo )ڨ)=r|,?ðii?4 qభz.6dL`֥~/uXX6ՌT\@BkFc,qcz0YC[Xl%CLQۗybq4p~v^ 
N:Ip4f^Z!XvڦV3͸SIk%J<q%dߕ!u옽0{A} z6CҖԬ4X:ԕ哼 3)S(F`I` 0.c61hu nٙ76eMHP!QB,Y7x)Wǐc`k^oPp*g\@%ADS_`7``kG׉ca݅C}r(c-+G-gQK.c-#pB Vݮ}k]ON1ARsެyg]GιEaaa;$VrZb眛6k>I@ +f@2a|a@LtLS1 Q4 RXٿ,Zp00( X01$ P+.@Q-hG!Z*@*ЊpeVw euKI RTU/~O D VԼ߸]-8DXӽDQ=5^,5*WsH,VUmUGjXJKw_:,խ!{3y̭*lbPJprP=,ǂ6h|tw7#p1Q1:313SAjh:j=l]/K] ]8:9w5 &AG ?HM5n"=* }Q/a|AVV"[R(v4øA(3*Kҝ~eWZըcF@b$am]XQ7 И=A$uhD͘^Uv(lirQR 9;ZuK#4KAy[W`(PS;rwldlW*~I&M-,BXgX#P@ iaʪk!^!dwH *ܣ?Rr)ysntf%S~~6yϤ IvԮ^>9sZVopc*%q#NC{j)BԽIfmNu}<θeڡ9"(t.W"{X$Ӭc֓\Nb=;4NDnU}*{ β|N ~TGpQNkKI$PMQ:p]Y<)x'z_?-K+Wɠ}7j?Ozi|{oUo\^cw =tn9vu̽7?_7~=߯O;r?lϑN;ߤΓd8ne=EZ)?$5?pw'N:eL^,X~w;ab5rћrTeߘt]Cy=7<:7bvoWzpt]KoM12#}21S@|$CMgHթT%CXoaXd$Ʋ1<[ҬiѠ؀كw^o1e }T>I!SN뤕4T jy$C$oI7}MYrq|z/7p/ο_]2Jߩo7yONwclsNLMom8R~QTq M˹NYg]OfU6DR/ |P+#B$+~l[b"s?!iƬ1^L- `ܤ'G؃s rbg{iq07㉯ID>9QQxïvN>aI!:=GVfK-vB[9wa˵+e\$PǜDXHQ1Hebe#paD$ƒXjJqEP*Q!rD H$4?* jmfs= M%w~}/wU) 擗Q:]ƦŸeA1 ~\W($Fٟz0 M_\0W/VN0gտF.{0m ǬŀdtT32hIu\ 3ߨax{Q1*e3I(BW!@78:xo[5'p^ıGnmg+Ɵ>}z"sS{lVxL1 `L>0Z'<&(c,Q* rj&bԬ7Pw>gc5{_( q*{f|֩{n$JA'VG&Z$qDDE4!Rk倶~;YĽBRkچZnA7RRBH P'5TXH(WX}9!D/'\QxǷK\U{)d)d;CKz~M)b/38XOKK52! 5|):1Q"IFH2hTIі2f>87b [G+ct|xܰHp)oE> sw9_Â=&Ѧ ֝~iCN$5`t ZLxC:[0䤜(q^.\֩4N#`6tpGMZMvXش߭X/@dC'4yͲ6\_>{:#P:u'Al/#V%O"_y7n|w`qaI"*B1+D,1 ,ǚ , CpyI_LTo2Kڝ:~~!Nޅ y74-3\鮓)]NQuhHnP|C{9'$΅- ZVO%]D%u% qc1BZD&q8 'Fi6$}XTHnNiG5vcgim\" QZ;I (a rDpͨav?_ QyRrH%]2ɤB[i 7H}I=XMȤmۥL쟮T @P_;9UaCNyik!=Q4wBF ?YԽ@)鶾q?Zym.v;j~h8TpLiDcݧS ٧R)'Y>Eqw1Խt_2/'^$/b]ڧ݋q3w7p_Hwwn@_j*NӺ]7'-J5W^&t2{u2Z _*P8t{I}IFq/O }>}Q_QA?ŝe`o8w>{޼yl<{w)Ws;T_3a ~t?}x,^Owhhzqvk~.%3V1O:$4oInqnq^F.X#׈bWa/.(JMh5R$@| TsR@:u@+ڐlMC'9U}z)h.1KȓPmՒxzہlxDNON<325 H32 32 eWpɔ뭤ʐ,X;n+[{zVM;vH6t:1g3IP߃Vε_ܥ{̻e(A88"H16 !׵g[i ..׉68#B5dFX#DABQ Mqzֱ`$%dP Cy&[}q C7:.n<2ԏLN,nݏOfi|wA׍Qz1 5?\+Op`O of#ڕ^ۣ ^[6F`r`kkF`X@wP.S^k$3B)p$\ o ՜@+'oDHo)!>go7. 
;0µd3L\^Ix2Hӂ?SMwjꥣGWcq%pZeHØ6p G\4㙺' '&u[2x>sd~<)ϟ h;kmv(4B+vnRacΧ /uuŎBNܑ?# n>'56~|rs,хzE/&g_]˥ww9/FJV\8[EIeT*3OÅ#cP1mpxVPK0˩M)$1WTBq̰eBK0F&%&qí!Oc>@VsƃJlMgN`GCFŲ?*3WH([FfP|8?sʖ]BYk?O\μ\/xy6_~aUNsTfGyW|JⓏэXb:e(ݎκQsѭtEC[>6yW6셩|/چ]?Uۮfc޴}`1|6?uwy ;~IA]8?NrV~`~VVAAK<}VV.W G-k;Kr%&.EZ\n, N7zk]d2v (zb).rr rQ(gz4S imO!%|y7 7 ӃJ (\B7fbǃ$=r<<$MMZU 3" Ds5z {6ր-j>]Ԩ yslDf5'o u1 VV H8ltxo2`F*0_SyS@T3] -ZS}FEG 9WӯWܕRcdm-aM&zD0@1ݓHċ*m[^@Mܴ wh[U9+MrLmw uAnY~Q|K@6V]f݊0r}&Piq7BEZ&Xk*7`44` 軭\u `q@w<@ 6i0:o_}:a}]It=[] :CȷDa(:}@Ƿz왜v0T(QܪwqQykn qu/pt*(sղyW&3xrw/w5ΛQϝm@uRA͙Ce`?+(it4JM}_Ǭ&3?i47wBDK[2M`!~ͼUN;RiU^ggBn/kg>?_Ozd Q'saqi?階W\c\`;5Om?kGJ?껗/߼~gY۶+=mMg8i礵'vΙi`FQrf,e&)2m" vaV1h<+Gn0c__ڽyu˧]o+}}0=zk?N`vBt7ݛxM7u&ʕ=ӽqn|q]875(\~)FSO߻q5rBAl3w/'az~h043SiC[ &5y@1?`K=A4Iǃ%`o7 :ܵDZ!5f] 80@3tho^{鼗6ܹ{I#X~l 69 Y~TЕ{~ dzho3ap H.WWvtg_\|lo_{^7;9uԺ8P軥_Ng|]<8| >ɛ7_?`D-hAgx8{O>Li>I$}<U~A`0>!# Q+.3ē^o+gidz9t`22"eۏ"D0pChtX>OF7Q߇c͔qd^_]6L?cߙp.3> ,v r¸1No8qHMũvP_ ȶ`v@N鎬mZ{I/GNsGDYE,fs,W*B n8~+M-0LEpNԍ2#u{{|ӎ!,%>n%C[U+6Ǩ ;x$| ?|fb 51$Hr!u 8T+hJCa\ZHB8+L E֢3S$F3`zqrJűX,i\6fݵS?L ȱ0n~V´u+a>Ԇ.ߜ^;4sfgAEur rϮn ~Dd\w"Sd:+j|LQ:;XD1us#V#g 8w:CN3(a SFk3p?Ȇ?I]cw֔(=~n J4dRf~ 2Agr \Ѷ=5Otr4Da'q3_ת%{I'):i,'3tc,c< >yOJε>YrXJFS!q8oH` ;E#ih,f{|g93}+Kk$xtIDx-C޿a#*TSk Dj\,m^횏س'Ѓ7v̞_yҀ^wZxTݵk1SեhYM:.wiOo4z9(gL` .fzLdhDdYX#$$6֡PIP%(RZ%[FbDbq!3)D#nS$ڍ/% F #!16dx,"4@XT\1 C'3,0!rsvIl]˓߳%m%/`Gm4o%1 J8ښ0RIc@[P(%B*U+ѝ&hh|BEԬ@jZ(!DC*TYjlHI4 +IBLd2(T-ހ]y2F#͢⏜^׿ш ?Yy[4LIchޢy'Q+;#A;cx t:k ȘCD)f=~o~]tu0VnٶPK놩UǔHN`͚f7>E+)zO[UL(ݞOrݪozr:f7>E+ͩUsivUŁԩ2w>Fݪ yr:f7>Eg"cmF^fquM\zמ-m-ID3%J8m$\h-l${@%e`2?oY'ZxC ؟cHAE8ORv+l@>u{ ݯklmGXYIi0dE&$a4ĝoA"&(,\g'q"Hď ާMT)o#8Qv|wx_`㛿 ("D@BX܁^oFVQ0e}ܷ 4.jFKU+.E>&HH+1 ņJ ^>=lZeψu30V$PQt:E D'*cGG/=/t<:٠sm:]K8% Ќi D B0 A>(Lcm@BGK840a 2BT&4V4J)54MQDjx)4 p0 o{su0{`I>RB+%*O*XTbр0ڰV!)3Bdv0;H2!2T\5[S JV8|DFIFgg69b_s*4<} {Wh[Os6:~ƾr0jP"4EZK[':I2ʚ0Ly$qϣ {%4j=a!6eB\'aԆqq˪c4:G\ÞX\.]UJj=Yf'~.'m;qBS‹q0 ~j:9hmTi1_DδYl֕Bs℁1-[+a"BZR٪4 GMZXbynHyXF ST iUj½W]q($Ҵ@m2M>7$wSU h3Pe#k1-jzx8~98ÁgxHRiiTAWJ„j:zUx7ia 
;^l.WA>Wyqr>`Gwq]o44ߐ^H4,8;-ǥP^Qht91wl}m,gi=MfCg4gxe1}ɾ]%'}_^XN@^Y;|۳A9z?/ `zH=2uicA[n&A5W1HwD D^F~(i&if }+md#.Z& NEh9ca5haإ ue9DYvKwwQ1nZ n"*:  O.0ma Ҡf&-,c!ްER 8NF3M_P2ftϨM i @ex@Bcx['!S3%%F yC\M 3MچPN.<|h/ͽ>n1Km2B6tC2%E{\-2EolWQAZ l{0>& 1BL ླྀ /׶UVh-btksL' r.\F•s 𛴰_̴^YS<)|)"z7HܥA mt^0v))ХuTByp"Ӥ6:B; T&ni3D278* :5i+EޗmhӠƂ+Trٗݤ6:on#ՠPuHD(`*&-yz_qXk޽R*d29(CLuHA : ;dW'5Hx! ,x ֻJyJ♊.&9!Qk08Db@"\D gT8\T%XRT H j}P[n\JJkD1"\b*F-!EYN 6͂1j(JiV藃6IY@sD:MJGAɮwN0 ^kãQ 85GШ6k"s@͢3V`(Mdxp`s:HB ;mXW7rA^F4z Z @g].OGHdԚ ֔|(tsL6+ :cd,NnAҊ!p0v>T#dQթ 9žI?n"PXÞ-]ak:C:pl}iqs]9UUdV>ZrVdv2:u0VownQ^}#IL2Jr on<݄ȋy=(si) /T1#)Nõ:4c.U^G% zx*3!3υKv~~"!/ZݩN])ѺTVOa?r~3{ۏ̢a=/򷇗|ܟİ9Uuq"OwN3Μ~} v+d6̨?T`3@<ӕ*}{2`'/`!:!sCe̹Jv\Y)UEWn]{6:Aǁ-{'IujQI N}Np2{,?&SlU/?^ O.;Wo^:7o=<ګwY?r;6n}ٛ?C񋗿Ͽ_v;meyܙS'W2ϯ )6Ս;uN*s~abs՛+:S64^e!;{fzhl?V*r_~q>Ƒ{iJWj5l~VNG'޼m˾_@4l αS8n%ÀƟ*>*Z1tP^w`}bn',D^'C19%_0_T3*W{reLWwVƭW??~_Z +!Lh#]9GA^j,c@F՟VpOnݸxf+v,^]cz?_y{t4韢e{EqT= _Šz4|;M?VrI|'l΢Q}p0]ʋAM ]r2/GJjY;>RNpr2,sf.8o>ίB_JyO*9"&qξck[__R.Jz‚Z_)!Wud}QWSu?\i5uwĪ$Y4͞^[dAֲMfίX"r,Y-NowqZ7<@y/佨?ʈ>Z$XߋfjadQS~/8`Ks|]Ja0 ӥn B)-ڸ.?'M *6m1m) %H(qGE .zhHB&-ܻscSI!;g.ApE.ȩ;+Ԥ6:`.G4#ES8Z@t&6!I|uC U 6=jm)ǿۘãڈ{ /7$ꦽ2qգhTO?[Z05#|y iIA =O%^9S^ =|IʄzKIB$%u/c @/;/*?^'X/FtN>j|b?bLv"]'oIEJOH{I{Xmio4e[/98Xǁ&E}ѳ9_jy/r=Pv9!{旷`(Np}nV7eo4]S_D7[!l5x88_k.~U~W !|5-.U/DUH_`o[V\1 l5ޤ.}B$k 'hVbT3c$OP)JTW,$EsoR36@ 7d[ʁ*L5I bq$s93*и)dMt̽5"@RE^DF F7*Ja\,;jΟ't4!ݍ V%Aa"r'-$fY}ݩUOHdZNlT k`9 ToJ<&oV(ur"j;92yX'JQ{K=I KD25:Sټ7} m2roOUɷ-{q^>} rmp[upd(o{Fb t4p2\]B'ShIy$ߠ%R7isƼ9|I37~t9{ƭ›TޙD+`,MjAGWZNF!zc=q*Ƴ(1a.{*ɭ跠5_ed"hN,p{m][oG+_v0}XNlαǻ/66^8AVP" II4=]_UWWUWWŠ ,?xSX/%%[Geft!+::﯇Bq ,$x!kA ,m3Kt#{H`/ 9wr/NyAp #s>66شρY9,NTZ^Gq>Sҙ뙂Z@'}27DW8)fQtęJ|a| ]<3^A|.L&1}~PMP?N|lD[F'x d~:gC#w߿ @_wE1}1cb_{(ǀ]ƎvDddUKz|p<}0ŷ{W_DE~o7s-y<\IE `<>@2$f0V$lxH$ spX7C(c1]7( 8ݳ*,WN W]k !3]/IB%ge6HAՂV0 9O}wJVU:sV(׿CsgZ^;-;s-g2lE;|֫-)@5F$aɍx!Ӎ\S(cXqpBaއx׺w[\>^-($Ac(// $c3Y$S8l Ԅ@6#;R" m1 2 }J ^BpԈE.>5r#3:f{nv8kKƎF c_^WF #HP`Rk ȀJ20)0B?alyU]xoؗRt: $':k 3Zk 5 SloѳEG}Sxbk.Ǽs;z&sÖ% zTUCea3t/>7k,ﯯ@wZ_ 
Hg1\vRfͶ+N_Zks[O@ ,n+ٳXsd7״^}wՃ (+Gv"~n)MT!˾ x@ZG,k pw0xkmc(޻(cb}Jb ȔЊ _-Uخji{rg* ѕ ٮ ~\BQO;m7a6@݋Vn:h&#O7^CzUTUnNגM 5Kd@C6 Œs dmW9Ae'y(>L殙ܺ˹$#qT-˼Ő -{ whG˘v*LilJK+'>e6gL%˒5)"3ު,f9[39 ql(ǙEd8(h92&qZ<&9= OdSSЉ7 MM N ;.ci`,.C$C3(IsMsqf}r1DI;h[/9ͮ&ٯ^ DAV dY c7 gqR-o rEyw/.բf{޾=3 ?\_FCʭCeH0IxNѬkMXR-ru UHߌ?WB9[ÕS g0TYw,ۼNGZwCoeJLQ=vA2 x.c"2HZYWzf#\B()coׁ >`~uj~#E>ؤ+=<)ئeg9' T^K'{8R̞i7iqi7j] Iw{D): #Dk%ZĀ+w0?oc(_}Gxk8 & l L y'ٗQ/d_FɾJ|7dH.ʝw{cH\͜Ci+ xe+FP,F{Լ+r<땧Fr/:3VF˼.cX:_c&%iP/hc>86 ؒZ)y%s&jCsșh` vRoJ sE &0=c!yn<"(HKqM `Վ 3R+8M!^-,`MQ〘"iiKB)brcASł:b,E&@H$rBan/Yݵ|/ dr/ S`'kKݪ#ܙ׏4l## uGM~y;4}>^X3{49]s ;'O1ޣ2G|-7odS{gc[d<omw8߉|(ke]\cg.1#"FsocnܚʲA]n ^-`D0؀ V㹅ڎ#w&TrҮ7_dۜ9R׈J ᾧ(fLrx75`D)DL+OL_J*(<( "tRAH^ Ń>ÆU;Z E)!|ELVk„:sˍ'ebsqrl ǬǪJxk"+~Cft$'9z%X11'2G/| GL[ ˗>xwD__ w҈/T%/>fTus'r 9=:G;I䠎H"d HJ[DܨK#Âd!A zrLN)/^>-\ Nq7I|ܒܶP@1DA4p&.hb!PrV[),XKj 5C9WX}ƨuNet@qޏKa]z+(Z_73$@CEW=rVN&"\<=y.gq/>hs;4DX!`70h`r(U,WJy800 B"j+0?C 6o{e¹Dj˂Q vNÕ-y&Ѵ,8t3ΦOtxgEF5ظʡ m!s KC@.:i4 EnFTINw,Z(rSGBe~P!m##*a~a}B3$kOZ%=#P.7P?#,` 8d :6xVם0 Zpf=2`,wbpj9 R #O bKDN /x{b0, ^& W9RR'sܮ>}O6:(+lL-pD^D^<[fP]VUĀb,LӄZÌ)J01V2KP"I)"#S§NxP^'{~K-)P$iV: "<$.՛C-n\Pbo5q1@I3m%NCs̄ E+pHbRP !XOJHU ^d6XBԦNqm=OPːG7:PHfTxeCXÕ sXv"J P R (M(MH`;k@a!&y lM`aAc'IzuUk4ېC D8I 볳ߚQ@}ð&XS#S+%iStC 1vS)0qR¤Pܦ#$6@X +]A1>g؇ nzf_ _@2prLM$q tW;uHd)@ٶ\ mJTp,'5bF!Ah$/RE$YzM-CxsM !Img4d\+f[Du,ٮvN%n<{1C'8`Ѩ엒KYc8t ) PkZ*Ֆ8';LZ/ O""vJ-^Bkқ Et.%ZHmKk`K8&L"6sa")RƘn2L^Of4m< UL%X4pm1#TW[ 퓏é+BfRcV?3?#$-쟺exDN+5MkkF// ߰vesNQr,3lO(WJKw㺛8T$C1\RUb9U(+X㹓 =T@-ÒbG4YulH 3v]oSpcm|qju^/-_}[M]"Y+Ee+n+OReBS;G}Pwnetr(ВՖ֍V7 - Ӑzo9O5y> ZCW}s9b$'*@3+ͥW \~t&[(]?A4heXPKN{/A I9@&Ų6~2:m͍˜En5ۼg0Ɂ3^fϳzԩZAAꟕqLFw, lwM[`Og{BygL^wYBj#׫sXFfTV?Yy,mKYOi5FiM}J8b(%2nOJ-p#gJf&4(þ:TDCM) ,Ҙb:dX46OVM#pN㠒$;VW(LZ`j_9;kTV80ɦ1NYL45HyG%4MBN$k2VY  WFv<נD6a m P]pIcd=R8',@ۡ>^"(H$֢l';TKP9bY7)9p~%C$ir/+LUi_~-Eo\,]WJ$%6X qT5!7AK"\_ᑴY>M[@MRKB%e!FX#@g[l,bXhDd0F+qYD~R*NrB- jyD_~] T@\aA guU<2;sDVui:C+#Y  Z t.YS3{׺[[g,q_̴Ū=t_K;Qlg76|>ɑthfffeA9ed'  @"IL'2d:a8/ q2gK ~*Rn}Fi0M>Nml:g3j/g:t3}o:i+M> 
X{f4/gn\|;8~bO6_|y_>ywA;sapg_=Ň˓?wvp]|z><M/^Ow/~xPy韆.g7.{jށEs6KCefU&}=8 -кkpP;uޜ ;Β8|f?|u {i&/17imϦ Wou==D˭Br k 3,H£tv辂E`w.gߟ͓U@LNjMC_\Ntݷ2r%hE{nՉٛ槁rSp 6!f4O@uS&΃ju[q̌M2oVxuqd~̺˗_~~07g,>bp?~Qej'lzNvǿ@mlYoN([sh>tؚ]| t9qON޼pyHٛwȯ^}lOw8.v)B'Fg-~[^;p7]|UIN?}1/.?u3K1xI-.[5wߙp|b nd 0r6.vdB`ZrC+w7`LCdt=+rpO>K\#8!QiZo.LjyL6+])$$٤,wl]rWoYND<>mZ匕]l0|xoyŵ$Tm.tbo9fN|cJ*L8Q M ƘPmQ'saVK1 )S[)_r73)oVD܊CqOV13t*'olk&AOq m`p[+\:]R~;@Q? p1eFgI,@x5z %Z=chrIabb:D:LD*bsN|]}|66HrYz#?KDkb8]˰ ҽ "Rq:ME5_veSq\6+"-؆nqʓ eݛ}kArZ1̡HCy'm΋S)$\Dqc`Jk&,4J$L"9MlY!Pg6y9L TRNrsI!6':1 XP#k KB0=FR&J#$5p`$iO"i|ē_1FpTcۆ;׸1Fszv98_qdpl[LjWHܚ@w&0 iѣaw&Wt3)5ZY ~h}n>v̓_td_XȾ}a! q6ԅCzcn@KDpjA־a Zj+l Wbm7`(ʄFF@==qmf #9SRܦS~ {`OK؆{i3M=S1TuT>P>m(uR\Vs:Sx1`KB6CV |Y# .a`xM՝ Y+,o;ι]Fxukbn/=wPzY1[}g dä^"Pei5FF ۘ%[{TW~mDbcԠ#9|GW{f vߙp|24X]W!HNMa:(vdB|pMyA- lܡ2Wǣ1[\Owv.s<&Eܵ6liRI0S uQPnԳO&w%'{VzČ~vsvY7Gfo?]M7 @Nq|8c?Fi/|tpWߟ?n?~}|gg޾{s|~6&)p7K?{yw|w_zq_{߇oLay7)h50,:&^g%we?̮x﮻ {so7Y4}«lп-xn'{+pP'1i8WN#w9(t[;ʛOPlrF@1?bӰ{P;4 ]4{}ۍ+7J5p n">fGp+|O̟]_yd&|.\} `y?>a˗&r$?Zj\T(jkhw;ǽoN|_z b|O:7YO3G|ޫ!jrc`&'H _0*U8dpxvu?|,.abw|8} ,My18}êpf8uBc<.I~i<ۿ- =p:{>p2vBr%$Oy"8Kǣz8^dSlw;vY+cXo|Ԫ_wO_n"pB}n Nrb6SgM:3uIYp^=!",{XOIfh 5Kd"M$jϤnT(%]dГg7+^!/2(Z `7{hޭ"7<}!ܷ~X4+p=~27੺+&WZ]qZz+)xL&!R#(W ][s۶+=={Tpx8$KK;cʒ*INHJdŋɌm$XXXD_ =K(sm1qeeIڀ{8?#Bϯg<MYSN{7^;?o1PdAo( o>?qJ'⇿|/_.2W  fw^~o35ڇWo{" v$fqW.W_oroStle(7_ MтU+YZաQw洧iNןVjӕ긪tP1{uix/(JC(J6$gKNi 곙(m]CnTW]GXnNMa6}4cKE 8}JGKќb:kKT:IʃV.U1;IJm}qMd;}}WgEu2`͈^&tlUMY0W"ՠ>b೓{{jfv9PMZh@d`Y՞J1@>H>:$UAkAR܈]LI *PAh8,;bd>/w;9x Dqe Ml~E\ZZg;NR|:v\fZ)hP6JhU$pL",FDs >0 -|B5'^+=x`! zrAzA8P( >W~Fϼ!Co S! UbYZ|@;|W9!(HF!) 5N@. 
FK   )2vR'Ro@n+b&VFhJm,qAؐD}esBNԽm "cuRk:$,4"&H Ns'1'"ƘY̑Mc&9Qւ$f,̧MeЯw6񞙩n۟3w Exh"uZz#ͯtSvS8< ձ%$61%Q:AKDgD@  A!q)yd1=𽌷 ɯ6>P } kѝE+-PJr ma1( G@J#`Jy*Rض"g+wwޤۊ&NOY-N=oV-#3[_^_i}"mܦW||#H LWCPe{t"C6Onοq]#(a0b]Τ}rqbkJ2LsRL\[~` "ڣ# 8|QK}&ќemrX qCM13vO0=JZ~0m1י{p<$f;6q w99/v%u<I9!:F(C֮j@IShVl=?nGe /MSo)$VXE<0x\gک|`.W}X5Jz Qͭ׳wY,lF5?L&.z4x;urEd-|mD(a(X ZB ;G!J AvvX9~Er@~wQzrya N sRQ" T&Q%VS(r(q:[#D \V:?vU# 9@ M*P<g@xT6qVWumὖL#kQDFm{ 9)T&Vt?DPY(J`}.,v͇w~^%۱q"ҋ1 R v?\d7_}Gs?3߸Eazo&m=LretcbX;ky$h=V_jp^qv heflh<)ZkNIxt d}G!?ǔθ[}/@ ySΜ|5Gnui2u꾣 w;⡇cAgܭ~O wkb<)z@L91?^v6gW^8fNQ3:n"dk ݇ wL%uc+'K;e"DTDɟ_(k0Q3b(*̛~d+$˧?.gnE2Z.gg/^@GgK"ss\`g;I;zrnۅgIOy"!h*oizv^|"@dAI$Jd QMt+g+m9;/3.v4P*i:^FЀB0ct@o,ӚR!ll^fK{1*XFd a/F&FKJҨBS77pB\} 8mDz,/g&K4#<HY0*wtuǖ2J̀]il,AH0jcPDC)c4H)$ FR0doDP77j$ɌZCb'Z,Tfl< s#PNLJC]o%uJ'Zx4҆ƱE1F5-֜H (H nd⯻ I|o1o2{s,l̘m'<.Q1ʘ8 |FO.I@a5=z[' J>vπPTJhX<&);m;#DIJ5b5pw IR#{ҷT //.p[;*5Z25 `>d|CIQjExOz Qe R}?|m:kbhZZ;|!džd\(f! Z~  0OhO| BګJZOX1, e.Mc+wn}p>fWF _ gc5!i,Odqmjͯ?fR|PCoIwCeem~sy+.&-SOdnS%mog'!@h/F g;Nr>G^.3G*p ܩ9a0-&8Qǖŀ"yrRҍݺGb`MW Bĕ١^hA`7Hf1, @l;DsH{yUy08ppz8NJ9Wg0lO}LRPa 'bJGKA D^yT dҹ 2Whu<eX?^esq;^ޭM)hadۯY 7eUc@Ԉ3 uNM@R1H c"8(I2s,B+;^r !mzF"ܿf h&jOҽ֟DYG j_+l ^@>xk$e Я0 $lC߫+a"$ !E#K-/p cO| rp $rC,bXa-Jf&vrc#$#QL%lY5zYE ÈűPPs J$6J{b 1hH_.<}*}I۩jSsw@B4V#L 0 ak+LH kUΘHqԕFfvɈ dNk}Z8a66SXHN1=)rʊ$ pƗ8 !Q T MP1N.Tȕdi%D9)"M 3 #!ђCr–yޡ(>fipc >N$sF(+HRqJxq$'P?#< 88CP?ȉ/ U#%oY!DN }h$p &5WjYeO(ǤJ={Fp(&s.mv4e2}<YeD*" D$Q0l5c)a1n8%*mUDdD{_/8=E,%SGc!eGvgêBf!~: E mύ;x⮬s"':=y{`1G;~7Dq 7Yg9 vv@ޕ7m,ٗ<`J+Ɏ9ʒJ.WxVT3Ia'D~` ޤyc4hțFf (Tl44c 2)TuB()4# s/*T˗92 pP9[|T@~" s"')`"qܤ\7/؜,M4I )$WTM.J楢'.{*OS 1P{IqQDvFEAœEKD#^ӵNEMSKnni=V=KpDD[j&NRG4ͭ#ڋ:&\Rwv4 #Ј C8 B$4lP>0S q}ű/O#쳦8 <<9 , xhyeO.X8E,q];r]SSZ 0U zX*Ag b/TX!yz`yb%ze]6eQ㍱y#˕c!k̀!+'ב?<魎85;Qf.SxW{zwg_].;XV%w&ҢQ7-2Ƀ[֥$:Je$Hݏ4?(.ev0")l6x:Qyx` M ,܆O*$+̄{dAS_pc̑X2MUkj񐳹#0 IzZQ]va-8̛^)х%o}*&^S [j%Q]xwՎxxF̡t~&<0f1xe74}3Ufャn6Iu{-AHD32u#|W8 #Xs~8ssL;o[rKMs:њnrGu-Jm 5a)qH}>D "c!GACJa ' 1-<<Ԉ quUf2HQ&1CRW w7w  
@/H"`Ō]B1e{mAk#GiמS-@{r$](Ck%X: 8ހJo YKPdRQlfpW[]I Y_k/avIfN9Nxni=n+Kh4 #0_`2>owL0|U H=vy Yj<q8>eC+bl]+F`>-¦wآ4Bl1GI7y`/d{ǽ3v3yz몼R$&|V 13 ]غ=54K_cNf=^n6,/~ ,Enf+{>0AlMFO5~c 4XA8 +%YZ;KT+2n WGL!]2| }JqSle Cn&e~D,<ͼq4I?_/fQA arH Ac9,q:87#ta,PFٌL"pmFMhE`n+R>Mk>:%Vc4Έ2ږ!?)߁o4 V.SyEFRmͺhА\E EdTZ Tx223^f?=.dM;!2`Eb%\f%XMF(1뚏_1ÜG!4g]{ 6)%YHGc$xdȖ:v2g(67x%՛$lxW|G-({}l@gmxRg aqlqKMQpPǤ'Tj='bd%2ykhyZ0ϩ4wi~( 5۽?j}8<~5d4mIlzG8uI=!0ʝp ~ܸvx w.?_ۓC 1N`vжӋ7/^1߼>gӋ?oN^qս6ȎQc hg6{^G|ӎNw= Xݞ3n\e->/zۄ!)oB0}X;W}z~ΰ??Mm_ (䌀cn~g@-Ѹ]"vi]m=P^JI ܘ4>G(jj𙽓{4Ό؃Ļw€L`GG/R 2ߘuwNoY1 :u4{McSć3GiW>"CU{|kHҍS7]ۍWw'!Xg_'_{<0[>n&_/QIdUrM7~B2z͍naRH I2{(; Im?LNVH{a(Ktt}'ɰf};lRaQz}}g}ka~KyBv`LSMor}g^n=eUegBf޷ 3َ[ҥWy3ٞa?ӹFH˜x%Y7PSDբ2a!7M02slqeҢl(S\HN]L.r @O1ʖ'ּρ8&` ؕf(#}Ś(_@~-00n*)k򀈀1ա(IrşxVtb,C:~M+rD?|OJc@hmo g%ndGT f=^F#͓/=o54^Nu|ޯΓ2ˬ!j3N|L GtlG̗0l2%$ *!I@kF4)2 PH06Z Qe6iTu TYj*`9q 7T,,uQ Im4C Ls==i7^E<ɨbڣa* ݠSH=K= T~'&C_O~ML=Vl8lq+Zdp/n5EKTKȰ %KR([ >Ѳ-YK]mT@ثQ0lƣǁ,|XYtkȎ[1UG0Wϝ_sZa#{qgNid~NQqߣqwLn L˗QH>2.EriJ8:합1-~$7W, [N/͢^S$V6 G;|oGͮ) //lP>3 yGe0g.`~ φ{QM*?Ф*?nDj%v(K_Z!*`s6 G(olF洛ج6"2̓OtӍjMg)U4Ѩg6 Ţ,S,GLE5N1DH:w khr7ϸ~ew&%M9^oA韘ӝSӣ$$P#QLXHB~%:A1g(\^1 n$=ysb{4lv[7<dVz5qxzk=4G* 85F)|dDywl'iyufpyiw/H֟ۓdŸ\?%a7-5M-- -y$w.9ɄVKۘ+;#q6$($'&ugŝԩV%wodjDa%!TP2;ˑM4lJ\l" .i!iz&A?:ɸ ْX^"obN\JߝLeLN;fdBlv?Tof;1}촁`T3M2MvJI[mh2Ŵf$j~djADe'#㽻>= 6e_;>Oq!԰eG!2駜|;yC*ڠGko{Si]В$3΅s3>hRnNMc˳>qT-4 "9i-Bo ۫*YaHҧ[džp^`qټZ*ϑ3,cL5۽[̸wOa2д4%j$s$fݓ>T4-FX=a{hhM dH7Y%+U*]xlww(y&=_N'O8D3=:,V8XGbzۓ_I$ϖE\Wk.e ء[7 >kda3't>Fd ;(w A}y]Oh"A: ncaih kg _ІIߥ~6&yQ !zid j$9`G~cotȳSycRb¢&`u5M;VB혆*Q1ZF:#|QGSs{KER{ڴvږ0!E_œvzː[6XeM?2T7`$PuܨS<Ⱦ2G!o+=nΈ"I*M0n+ ;JIGn5# \f}D5k5;cx,齩nI7й[m>:wt=Z1n B3d&k{GGӄԓ>mWJ2%U>7Ra!L,Nn?tE#:-hާycFaIJqX)1J[/ERnM5VHͱhYb}"lQkΉY)K*(j(s˛\ԕȷ]qX?ŕSw}.խ*Z]'RܤSGhIpߛt/!%y2]j&'6ekq/&f>RpC?~Qܴ!#S/Pw%b,JtuXvf=(Kͯeh1{4ʸ+*=̔H1yđ/'=@J.}QaABɨt2jՙ|z9 oQb!rcq &y JHX} 9 FwdK-ŋ0\*!ݗ8#eHF^\iL1eQQ7i|<:%Uڐv@0u~&EYT䣳Y8{2e]Q(wh6^Hߌfy/̈́NBXu'YAJ%k. 
/S\5vUR5 IddX`!2X=1]aWY|êEV8J785tj6,1~rCAnqLtsSvy)iƹx;O)Q 66K;mkT5##D6L ҸqUjHQpR A4jF5PrWLC1\&tDr3+(FCEȓaaqi+iS44wZ)nP'j j)E%Rݨ\:`vw@VWv*k8x5L,V V5DH;{vىNv19`e ~> ̇-`:=6]hKQs0i%$`J?_?,(싲cj L4J+0hU)AzvmMlͣ\fv٥ys-rL+o\DKRok7Bv۟Jn[h)ڵHvQ{nei":e(1H/ 3/^#.f|6 YÎ-xq7G~? K3 ꔲ-ETk 0[0J1dz[qTPq`sIQ3yO+nMfj ~ ][ueZ-'+fPU@[:8Ԗ%mR@s#jZ*t:f=#|]7=zdl¯I.pvJ}1=%8bBJk[ؒVQ[ &J1*.ʖm<<,K|LUk`+c-,`T}R,ɕۃ!DuOJUGB*ɔ3J%My @` {71R?\;{FJO}c wWFbL![΅AYlOz8Ms.~Jg!'617`dp ''mqDTkվ{d+pj禤&nzIIŵs[}Gv*Zu9v*%9m fm8($4zkUkfZ63JLzrZ ":# '6Sm&R;V *<{1 O܏8FI03~ (&)+z!ȓ,1K2;G8@w!>&>PLxb)#X\XЍU>.Iۋ!)5HC&%)0u.lTbșT\ͺL9v+l#1󥙗)F`!bsFg?݃E\1*P y)1vJfu 7Ӄ czh&:CL~ :EsIA-vwD:[/ݯ4_9(Ib{gVvܴvhA(tx@UZSos [ޘBgƕah-Z:'n AP*0P+$tk?n @0ƪU)trrBZ^BWtb4>~?'PB>JG~ Eu4ԍ7Q NB(t)A(t8 `73[^Bڑ 9ve(w6_,űOg<+23#o<5"߯\MlV3Թ7d`E;o/YB!ϒ_ݠӿH Ũ].y-3N4@86t:_@X_N&ۛ_gXyoOl)b~=v(lv.Wsq|?0R~! sL+,5JDӽ^ }E1q߅I b¤ \ɿZQLT͹(q(&Wrgwm'm0?,Xa#J 1(4FGEto(0Iw*ړZ&s<g0m&sye=H2u҃嶲qr){ue̔s>H x }xWUKs3O~ʼX?q8 Wzȵr]GU=LWeB盵>{|/3smd|6 ߙǁzeoNfWtB;ϻ~&2`FʸyzUzT߿8}gL?h]3d@:2w˧W4݅Pm/E Lڂo]^g h&:nB(SToFn&KVTqˤh8bLfjl >,CoULa(k,qzQH"+ox |eg+ۧ12.Fi):E<%Ew|wQ.e| ȓa+0Nlc',dݯeuߊQAuiYqns[U^8sīΏ\]ms8+*1{[{ngJ&_v0ѝr4EQ $%ۜTMb& 4/$ x{ 8Zj\KU2p< r!HL/ȥgweD/\h4-@g.{4uV/\^ЩZr~eiJ"c9N0B̳$G%RD)J$1fRVI*5sx{s2Cb^m'_L`!­`lmO\ љ,i 9I2II.s'wXE^i9T1f$Ǡbb D5XÑ1R!&V[('d[h EZ*RH&*Sy.rp.di Չ"gyTYl@L4L&:a)!(䕽n6;jH}@V=*Tr:T)J>fZ1*Si80J>#JmFJ܆W `L@]);!Ex`=(YQx`?؁=\R|G)8tw.;188+Xޟ,o4] Ag ;v&:<~TF%|ׁ`՟h{_/ٱڳ:w+yMnNK^f0|v&wŭO{,b |zz{MpZqN]QJ$d۞l1M@{6|~'~[wC G){@GG]e`DeSM6~1‘E4uE҆eLZ f-O77MG] ?N$DZZ4'iuawJ܍$J8W ;eYnp);0D"tis ]Ly`O{3q?*ﴺ+ʓb"$:A|r P ACxaQ@jQ90)^s)^;]s? 
Tݎ+ZBXqHDC%w)!_Ӫo.lZukEԎl˟zpg0.R߸JY͸X,K*V/n?8W.g .T(89t1y,^V|Fs73h~q<6EgWG.EA(kv-6JAn_ޯ8^paCYTDYu!\EtbOoY7*:hB1Q6X^VZfs4=[U4Dt%X-릠uv@ꄶƺpуR5u SiݺАW:<aZn%];etvfb=˻Kozp⎍Z&Eŋ/'kFibz3fv}H2QpiKkܖlb3k?qpm}%lTЫ8' p,TSnh\~F|dS<,Wm*]O(Xn+wbT)U*TɆ«q:U׊".ǩ2TaaN6Vup'N yƩ2T>"N સZ$;ҮJ.O]o6?5KfC5ټ,G5].M?p7l;MdM6(mol1|y?\m<d+5@RQmǟ<M*\Ug?lQNP+!DS3F±10Gc41j%26xnjѫ88\0G5' bq^ϪJS?qt`Sz9@GGK.G+8cg8 d21bS6Ujs5VeYSGq1ܸt Z,1-j)J[ ٣4|Z (`HfyXc5Z#puVm^RI=ݼ3?fE:1Q"8#MʣyfdŴ 0:E#⋩' Lf-\~nCDks !+=QM*C%Q>-6e%"NH4u3[ӌ 1RZ LR'`$fr2 Pe 2&1L] $ PffH0`HQnܶ}=&6`8JVOM=pf1g<^odjݤݤ֜bЍ 3+N3W\IXc.aHV<02s~KF!(^h|hFj<1T2Z/c6 h-&|T2%4I2$$7C MfmW \]ⅹI^<~0nK#z$[k_=EPE֏_ˋOt=M/k|}w~B VDO.Vy_q[ड़rXO3Ǽ (532TOOEA0r`i'77]n`CI38 I}M/Bs%H 1۵1C Aj笟1bi ]in rGeoEn^t0@J󌞾<.>bc3Ar^Oo&pߢ?מ~ \3Jڸ7DNlå.AP"9ʒ%~JzoSϽFZPMxg G G 6ž C6 ]n {. bP͏HR9k̗(6|~e~C "{@237̃9BhHnф6@2iNp8ĻeLZ f-OvUkt4y?[Ȣ|LjOoqϖiIF5-.N5?=DQVN8_3eX3RsHRFw@@)/YQp\Tp¸]]NK p g0W@2H (RD<|ݹDp"QnK ^]gP^,^Jmٴ s% Eq^Z;ߢZ ?')ޖ\{M)ll X"&$^OV;GI.-]bCn?8c.gV98aOs@`e݀˳[s_|dC+R7ϵ{vYT(dWY3En^~,*,4;ҐWA:{m)9ꄶƺ.r2|0iݺАWA:G[֍A[(>FvUYk`֭ y*Z)S,0N Wq%S)S"UMO1 m$LsZƉ`'\I&똰$ ijEy1NYG,^hlS*CMUDZsb I!Fg4L`Imh2S0uL,/a&2KFB &i2c, ُcL!QҶk ao0. O7W|#p=[G\sN١ldR 7zuBSihű 223Y H ڂXL1&Y2 J8wL M,5HH%. xFKu.e#ͅBpK m>rZ߇v %.%gDˆKBy+iQLiEئ `OYg\A ,gͣBpev˞[Gx"-VvJG40F?מEOhF5.*ZT0O v^V2JpxFMݜ/+. 
ӡKIDKoڇ)vChGq0KE+p"|8v&pMHp(ʕ Wyo܍N[Ih`nP0Y#8T9 H?<0d7FM$,.|ٜ8NsO65g'v@'ivCz-(+q'zp޽TڢE~' k;d{5Me@Kg(>ma<[p2E:Kgxمr )Vj;VmEP |T'6q)κ_n]h+W -f[(>Fv%l0~!Ӻu!\Ek:=wԘkf*k:퐒d$9Y3 fF$Km戼~.[t1^tsMfx(d(EPu(׼[!T@Ctc 1F9LPUHN-*#yɗ o#s`Ѹ4.aՉr_Ûep~5K{D'+9{HeLit:)060G ,{Cmhfԇ:k!+"mSg;]a97{$>w/xjP[6QlQY?fu7V*K>n*wzeıN/vlۡ#$ӎ|D^|%6l-u#nDN>G6Yc_ d˹8(ۨ|n0sEq*|J :ȱ;l _:>a_722myvA'I0uȢy) 8ψx{I$cی v{rPߧ_}/#h3;Uck{WFPT~ecCy_uɄ ߋŜ!ZB|.yB48*L 9 A`-q)%PJ)j[iϲRG*@i彋Fgx5}=·]02(p˻OKiFн@_4>"wI6S,Eaz Oox[#sICpoAi4 |K/'_.;G;_qY_1H0ۣf\/)'^NeíxȔC[b35*/8NEIR&C&Ɖjibg +酐(:NCE1=`#/558e`盫ua7-QqQH0Ba6c!h5nCWEC ~{QZ2 bpgW+ߏ85Y_T0xblAH#SIΞYp؏SMX nf`#6I{1P<⑬Ir_/4{[|._vstd|`t2](N?\a8N;'bXT,=ͲF2eB(o`QP $[sKߨ -G{7ƃ@I_2I>9Nao!A/ *wΰ m?g#M8//PR}C:zT) UU׷2UYT.,IDgUS;nntNo0 2U I`,2e= >r(ּ0W&GQ'I`:RvؚabHУ^*풗[QB%=EvIT.\B%x'*\=;g 7O"w{, zm, mQ{1q+A'#dp Vφ;Ջ\Dg,1cqmHwoS*`?5X8wQ=?Dr\p^\g3-n3gT:Ψ[C|q+doiى{~p3e)1ϻv!2dHkp#GOޅUsr&L udwe|ws&Z%5UIkij볱yo, \VU 5,dB~Nݻ7{wnz5qMܓ:)]rZ.rAkg/dkV ;Q0bn_FPbu d=juz>lv K,yBCt/> [H"U9\I+טJŤ2EQgR8 @`Bږc >@P?0{*=ށe8/jYfDf"&Xn)bL"-i ߤsI&XСwJE,2JUf:e(+FYOSi DFyKXR *G7(U-ם*&dSũ "nq蕃ef"j3"3j*w6*]abj3j{k d+^_t[6SekSR"ܲ)wȪ#!p~\˱DӬ 23 CZʃ򧭖LLYc,2a]? 
ɧ3d<Ҿ,GXeSJ3 >`kq<=-LvQX;,\)Y$TزXx-Xs0e(ئ=̘۫"~Qkm2v3A5%A%lY$t(FY9%`ժƾ$@&\}< Tzps̫7XkQؠ\IS=Y+M/n<@a| |@Ej`*`_&*09M\v͓\ ML|O}( mc>t, -|$#eKaH Tjcr-I8oR+𛎅y=0 ]|*K)3u)3_EN/F~OR' b=ULFc8$ElsF@nKa<"n"*bYm<[昏rn},Oc\jXE2aV3i"TRl3-ms8cwх4Q`- !g[P[#KV)i:@bE88rl ~h2d{6QEÌ^׳L`t G9n([Re#+#n (rAwka]'+:&n[w(h:`dۜ:8gk9}T ">__#iFC_: !+!!畅 /gsDɽWɔ$ 0h yOr%He:rV#Iu gڡyhnt 8V0H+R!jX]9Ԁ%$IľcxhSpUM#V+BfϸBLZbk B^%K=$q|̻>d=yJIwb>'r'(^6rA/o&͖snTi^qG֗if 6TܞvjQBT/8PNVsh ՇiMgas"DFأE#BL&ӄ-2s( ]A =LB4&(H TX"`/o.E׉+qWF_\6:ؠ svt<}n;ę?ćlSm%2W0yfo3>||T8Nb3-Qİ͞6{bj՜.GoXKc;-8q8%+:NSK l<޲-PxN6:3Nx&-$FwaPX rxf>HO)Cj@s xm0ԇwv0oB:AnpYqY#ݞB7-MXr *N[o}5U&; %sY!BߚB'X7Du@ =NDLn(tB4HBms^+ЉoG)=N5].T:/ B5|j.AV77ImvTiv0M:+|u)ۗLo }7|LQvlV/Zob*:wM)Q)3oKʛ-XS`͹jB䘖Y_]-* ljÔ0q{+&z VN(tL4%JymV+ ՓRB%RJJxR4-:ާRpMvpY09ٱ5Zz Ji:;Щ'QT2/[o,\+*:G2|;BW4t 2 4jjZ>%+tEZAO,YU-VQ1ZoI[xcŶΫbjiE?駵C}0UkDK}HVSXs.3z3[BǖB%Q9^jndV)thiaRuޞB׫nJhWBHOS$ӷ g':(m= }%~_#nO3M0F:lRU|絥멢h[&?5Xl{DAͷ^l+Zlm\iʕ)- :Y" maU_p4ڸfmvl]S7 6YmW1SXB c=!Pة^B;x7jއEwB}XT@EkM*vC, **5 2*Wae0i 4@)dM:Q1E*z^%G0 uZ8)y8&EF"ɮd/QQpSՈY`ZRf_G,=S5;F>\i,0F p˂ ī^Gfwx +,]ܻZ|w/P21-@qx roPFFsjrC|yWi;gXrA8Ɇ#bh8L o$u.s||~v~9i% ǡICǷOR<|s?;u~t]Q܏G{Gq.]w/ |'V,HP-Lۑ4Q4NQ]ʄ6Bb-qW^lW w%KIr s!Msr КMUú_($}c쎅x׸ ]V>j?D:Ao|6J Z`߽A kz0{o_Y~$ad;m6SXV OLXޟIsY]pVEOy8ǩ|SQY^l?ɒؿq>fjU<z^h^7N|u @3_4kёᏏ0 vIã88r8䟀iYze#dʄP- b'Iӷ澗(QZn񁒾d|s  >~Q\P3tlXn8o !~X\)~җ[.יԳbN1FXA(lxX<{2Bi*˅@:U~U'rcŸ م b*snXՐ9 "SfE鳝? }j?fTJ␺n\a9"Nh'%ٸFq^򡄣K)r)p\RPM!ඝ0ԇ PhKҨNi"bq=5 O|gߏj-9ڽÐZ6vA6@:PE'$a;-y&I{5KRRkG;g㌔EvXwmG!hVKdU9ʔ5;;  gAr}1鞣!_!T! 
&s=}5O򢭶&0W;O4Ȏ9l$^g?A2.S]yQoygFp}GJP4ir \5TlXR!IiI*/!ņ]fV(a*>}]mbf9Q'%i#WQ'3-mn+Ki.*Ժ}LFڒXŒI,JIѦ؀cp(8yA4qF E1A/XcUw'cC^V/Ù\~x k))(D!eW+@PG; ƛ<$(8uJ {(Xĉa*؋ p@BI^}a=C!u>_!QXxG[4nijjqÝSK1wЧ҇d.DU%j58P!LI̔QhO N84hv>G5EkxFu!v]iB1z;ܮ6ěg&lv_A̋bF:@d2 4n2oBs2Xo7,IҞ..P[ f\ @:wp(gՈkK!®fU@R_m|jL=f Qo|0K}5d>;=c?a^ #Sjoo5+Xbe'M8Y΃;)*XSPK rVU.BdЅ& ngZ΀Twfr#SR\zfŭ΋@ \'ܵX9܋c >ȁlِaI*}[ZjCnl,\ԈhZLW H P}5ptRNFI$J6H[|(1Z j߫c2 9]8s(?iny_2fHskAZ`AjXvN1R'쪳ܝF0N19an!Vo!;$MǺY̅ބ<ߤZvݺEӚ@KĴ24䑫NZ?G߶n2Ԡus偍긾uk'Rusneh#WQ'-0kus偍긾ukp#̎Fm=Һ!\E7t[",n.[v2K[Oa2K?&l(*r%07ÈlQ88LVPP?lvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005440012415145123560017677 0ustar rootrootFeb 17 15:54:46 crc systemd[1]: Starting Kubernetes Kubelet... Feb 17 15:54:46 crc restorecon[4710]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 15:54:46 crc restorecon[4710]:
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 
crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.018853 4829 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029283 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029341 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029353 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029363 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029372 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029380 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029389 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029396 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029407 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029419 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029427 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029436 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029444 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029453 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029461 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029470 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029479 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029488 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029496 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029503 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029511 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029519 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029528 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029536 4829 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029544 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029553 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029560 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029568 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029605 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029613 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029620 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029630 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029647 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029655 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029664 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029672 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029680 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029689 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029696 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029704 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029711 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029719 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029726 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029734 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029741 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029749 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc 
kubenswrapper[4829]: W0217 15:54:48.029757 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029765 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029774 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029783 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029790 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029797 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029807 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029815 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029823 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029830 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029839 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029847 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029855 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029862 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029870 4829 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029877 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029887 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029897 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029907 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029915 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029923 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029933 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029943 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029952 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029959 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030949 4829 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030971 4829 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030986 4829 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030998 4829 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031009 4829 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031018 4829 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031029 4829 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031040 4829 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031050 4829 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031059 4829 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031069 4829 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031080 4829 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031090 4829 
flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031099 4829 flags.go:64] FLAG: --cgroup-root="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031108 4829 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031117 4829 flags.go:64] FLAG: --client-ca-file="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031126 4829 flags.go:64] FLAG: --cloud-config="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031135 4829 flags.go:64] FLAG: --cloud-provider="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031144 4829 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031155 4829 flags.go:64] FLAG: --cluster-domain="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031164 4829 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031173 4829 flags.go:64] FLAG: --config-dir="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031182 4829 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031191 4829 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031202 4829 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031211 4829 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031220 4829 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031229 4829 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031238 4829 flags.go:64] FLAG: --contention-profiling="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031248 4829 flags.go:64] 
FLAG: --cpu-cfs-quota="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031256 4829 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031266 4829 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031274 4829 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031285 4829 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031295 4829 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031304 4829 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031312 4829 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031321 4829 flags.go:64] FLAG: --enable-server="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031330 4829 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031344 4829 flags.go:64] FLAG: --event-burst="100" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031354 4829 flags.go:64] FLAG: --event-qps="50" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031363 4829 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031372 4829 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031381 4829 flags.go:64] FLAG: --eviction-hard="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031391 4829 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031400 4829 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031409 4829 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031419 4829 flags.go:64] FLAG: --eviction-soft="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031428 4829 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031437 4829 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031446 4829 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031456 4829 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031465 4829 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031474 4829 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031482 4829 flags.go:64] FLAG: --feature-gates="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031493 4829 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031502 4829 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031511 4829 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031521 4829 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031530 4829 flags.go:64] FLAG: --healthz-port="10248" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031539 4829 flags.go:64] FLAG: --help="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031549 4829 flags.go:64] FLAG: --hostname-override="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031558 4829 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031567 4829 flags.go:64] FLAG: 
--http-check-frequency="20s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031605 4829 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031615 4829 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031624 4829 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031633 4829 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031642 4829 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031650 4829 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031659 4829 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031671 4829 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031681 4829 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031690 4829 flags.go:64] FLAG: --kube-reserved="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031698 4829 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031707 4829 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031716 4829 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031725 4829 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031734 4829 flags.go:64] FLAG: --lock-file="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031743 4829 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031752 4829 
flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031761 4829 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031785 4829 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031795 4829 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031805 4829 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031814 4829 flags.go:64] FLAG: --logging-format="text" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031823 4829 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031833 4829 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031842 4829 flags.go:64] FLAG: --manifest-url="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031851 4829 flags.go:64] FLAG: --manifest-url-header="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031862 4829 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031871 4829 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031882 4829 flags.go:64] FLAG: --max-pods="110" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031891 4829 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031899 4829 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031909 4829 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031917 4829 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.031927 4829 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031935 4829 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031944 4829 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032897 4829 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032907 4829 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032917 4829 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032926 4829 flags.go:64] FLAG: --pod-cidr="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032935 4829 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032948 4829 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032957 4829 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032967 4829 flags.go:64] FLAG: --pods-per-core="0" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032976 4829 flags.go:64] FLAG: --port="10250" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032986 4829 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032995 4829 flags.go:64] FLAG: --provider-id="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033004 4829 flags.go:64] FLAG: --qos-reserved="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033013 4829 flags.go:64] FLAG: --read-only-port="10255" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.033022 4829 flags.go:64] FLAG: --register-node="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033031 4829 flags.go:64] FLAG: --register-schedulable="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033040 4829 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033055 4829 flags.go:64] FLAG: --registry-burst="10" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033064 4829 flags.go:64] FLAG: --registry-qps="5" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033073 4829 flags.go:64] FLAG: --reserved-cpus="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033088 4829 flags.go:64] FLAG: --reserved-memory="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033099 4829 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033108 4829 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033118 4829 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033126 4829 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033135 4829 flags.go:64] FLAG: --runonce="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033144 4829 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033153 4829 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033164 4829 flags.go:64] FLAG: --seccomp-default="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033173 4829 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033182 4829 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: 
I0217 15:54:48.033191 4829 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033200 4829 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033209 4829 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033218 4829 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033227 4829 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033236 4829 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033245 4829 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033254 4829 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033263 4829 flags.go:64] FLAG: --system-cgroups="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033272 4829 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033286 4829 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033294 4829 flags.go:64] FLAG: --tls-cert-file="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033303 4829 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033314 4829 flags.go:64] FLAG: --tls-min-version="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033323 4829 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033332 4829 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033341 4829 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 15:54:48 crc 
kubenswrapper[4829]: I0217 15:54:48.033349 4829 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033358 4829 flags.go:64] FLAG: --v="2" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033370 4829 flags.go:64] FLAG: --version="false" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033381 4829 flags.go:64] FLAG: --vmodule="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033392 4829 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033401 4829 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033631 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033642 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033652 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033661 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033669 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033679 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033687 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033696 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033705 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033712 4829 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033721 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033731 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033740 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033749 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033757 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033765 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033773 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033781 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033791 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033800 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033809 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033817 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033826 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033835 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033843 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033851 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033858 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033867 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033875 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033883 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033891 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033899 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033907 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc 
kubenswrapper[4829]: W0217 15:54:48.033916 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033923 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033931 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033939 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033948 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033957 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033966 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033975 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033983 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033991 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034000 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034008 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034016 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034024 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034032 4829 feature_gate.go:330] unrecognized 
feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034040 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034048 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034058 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034068 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034078 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034086 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034095 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034104 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034116 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034127 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034140 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034154 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034165 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034174 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034184 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034194 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034203 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034211 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034220 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034228 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034236 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034245 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034253 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.034265 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.046313 4829 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.046362 4829 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046491 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046514 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046522 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046532 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046540 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046548 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046556 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046564 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046602 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046611 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046619 4829 feature_gate.go:330] unrecognized feature 
gate: AdminNetworkPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046627 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046634 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046642 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046650 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046658 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046666 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046675 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046683 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046692 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046699 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046707 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046715 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046723 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046731 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046739 4829 
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046746 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046754 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046762 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046770 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046777 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046785 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046795 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046808 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046819 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046828 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046837 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046845 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046853 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046861 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046869 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046877 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046884 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046892 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046900 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046908 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046916 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 
15:54:48.046923 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046931 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046938 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046946 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046954 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046961 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046973 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046985 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046998 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047011 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047024 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047037 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047046 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047054 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047063 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047071 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047080 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047088 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047097 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047105 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047113 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047121 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047129 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047138 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.047152 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047371 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047383 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047392 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047401 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047409 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047417 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047425 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047432 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047440 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047448 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047456 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047463 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047471 4829 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047479 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047489 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047497 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047506 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047514 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047521 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047529 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047537 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047545 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047552 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047559 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047567 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047607 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047615 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:54:48 crc 
kubenswrapper[4829]: W0217 15:54:48.047626 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047635 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047644 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047653 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047660 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047668 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047676 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047688 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047697 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047706 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047715 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047723 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047731 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047739 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047746 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047754 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047762 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047770 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047777 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047785 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047795 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047805 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047813 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047822 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047833 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047841 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047848 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047856 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047863 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047871 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047879 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047887 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047894 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047902 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047910 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 
15:54:48.047917 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047925 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047932 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047939 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047948 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047956 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047963 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047973 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047984 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.047999 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.048223 4829 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.054208 4829 bootstrap.go:85] "Current 
kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.054326 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.055980 4829 server.go:997] "Starting client certificate rotation" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.056028 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.057439 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-24 01:48:50.912562874 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.057631 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.087021 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.090911 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.093717 4829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.113809 4829 log.go:25] "Validated CRI v1 runtime API" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.153307 4829 log.go:25] "Validated CRI v1 image API" 
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.156512 4829 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.161785 4829 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-15-49-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.161841 4829 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180203 4829 manager.go:217] Machine: {Timestamp:2026-02-17 15:54:48.177425026 +0000 UTC m=+0.594443014 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:420e9fca-55f5-42fc-a60a-919d603b95e0 BootID:e093bc13-e732-4259-b0a8-2325e80c34f5 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 
DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:26:91:8b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:26:91:8b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:91:01:36 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:31:97:72 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:de:60:64 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f2:de:06 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:32:8c:24:24:37 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:68:71:55:29:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified 
Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180428 4829 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180788 4829 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.181608 4829 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.181956 4829 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182006 4829 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182323 4829 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182343 4829 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182989 4829 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.183046 4829 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.183876 4829 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.184023 4829 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190852 4829 kubelet.go:418] "Attempting to sync node with API server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190887 4829 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190925 4829 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190946 4829 kubelet.go:324] "Adding apiserver pod source" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190963 4829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.197549 4829 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.198725 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.199790 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.199888 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.199962 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.199992 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.201663 4829 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203277 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203307 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.203317 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203327 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203342 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203351 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203361 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203377 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203388 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203399 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203413 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203422 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.204317 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.205164 4829 server.go:1280] "Started kubelet" Feb 17 15:54:48 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.207758 4829 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.207719 4829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.209168 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.209844 4829 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212170 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212328 4829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212353 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:47:57.847606568 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212910 4829 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212942 4829 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.218451 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.218741 4829 server.go:460] 
"Adding debug handlers to kubelet server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.220180 4829 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.220526 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.219756 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189513b30e988654 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:54:48.20510882 +0000 UTC m=+0.622126818,LastTimestamp:2026-02-17 15:54:48.20510882 +0000 UTC m=+0.622126818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.223539 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.223906 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.228008 4829 factory.go:55] Registering systemd factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.228046 4829 factory.go:221] Registration of the systemd container factory successfully Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230154 4829 factory.go:153] Registering CRI-O factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230206 4829 factory.go:221] Registration of the crio container factory successfully Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230962 4829 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.231067 4829 factory.go:103] Registering Raw factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.231109 4829 manager.go:1196] Started watching for new ooms in manager Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.232677 4829 manager.go:319] Starting recovery of all containers Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235517 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235604 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235632 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235649 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235668 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235695 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235723 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235742 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235798 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235817 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235832 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235849 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235861 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235875 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235887 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235940 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235950 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235963 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235975 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235986 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235997 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236009 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236019 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236031 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236043 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236081 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236099 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236111 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236124 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236135 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236147 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236158 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236170 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236187 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236199 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236211 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236228 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236260 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236274 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236286 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236299 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236309 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236353 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236372 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236388 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236400 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236419 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236444 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236461 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236480 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236530 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236555 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236609 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236627 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236639 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236651 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236663 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236673 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236748 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236760 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236772 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236785 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236799 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236815 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236831 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236846 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236921 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236938 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236953 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236995 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237011 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237027 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237044 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237069 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237140 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237155 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237172 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237195 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237212 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237228 4829 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237244 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237287 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237327 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237355 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237380 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237397 4829 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237414 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237432 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237449 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237470 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237487 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237536 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237558 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237597 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237615 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237634 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237652 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237668 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237685 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237700 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237724 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237749 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237780 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237800 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237828 4829 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237849 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237865 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237882 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237919 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237937 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237958 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237974 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237989 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238003 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238020 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238035 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238049 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238063 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238078 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238093 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238109 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238137 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238152 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238166 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238180 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238195 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238222 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238238 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238261 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238280 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238297 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238314 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238330 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238345 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238360 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238377 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238394 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238410 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238426 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 15:54:48 crc 
kubenswrapper[4829]: I0217 15:54:48.238447 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238465 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238480 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238495 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238509 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238527 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238544 4829 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238558 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238642 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238665 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238681 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238697 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238717 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238734 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238751 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238800 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238820 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238835 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238852 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238868 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238884 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238916 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238932 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238947 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238965 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238980 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239004 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239028 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239051 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239075 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: 
I0217 15:54:48.239091 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239108 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239139 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239156 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239171 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239186 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239202 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239218 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239235 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239255 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239273 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239290 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239312 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239329 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239356 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239369 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239388 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239400 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 
15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239417 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239430 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239451 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239463 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239476 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239490 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239504 4829 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239519 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239532 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241561 4829 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241623 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241644 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.241662 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241680 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241696 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241710 4829 reconstruct.go:97] "Volume reconstruction finished" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241721 4829 reconciler.go:26] "Reconciler: start to sync state" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.260785 4829 manager.go:324] Recovery completed Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.270999 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272889 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.273549 4829 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.273584 4829 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.273620 4829 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.275027 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.277996 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.278044 4829 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.278077 4829 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.278130 4829 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.279945 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.280007 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.295216 4829 policy_none.go:49] "None policy: Start" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.295966 4829 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.296003 4829 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.321678 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367494 4829 manager.go:334] "Starting Device Plugin manager" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367598 4829 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367629 4829 server.go:79] "Starting device plugin registration server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368216 4829 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368242 4829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368445 4829 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368674 4829 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368688 4829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.376418 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.378466 4829 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.378550    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379735    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379767    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379779    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379924    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.380229    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.380279    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381328    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381363    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381376    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381650    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381703    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381715    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381874    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382172    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382274    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382626    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382651    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382662    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382756    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382872    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382910    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383527    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383563    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383565    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383613    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383604    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383650    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383672    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383626    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383723    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383913    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.384037    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.384083    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385012    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385053    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385072    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385257    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385314    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385338    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385671    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385734    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.386984    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.387085    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.387171    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.419275    4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443834    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443914    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443963    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444011    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444079    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444125    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444209    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444293    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444340    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444363    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444384    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444423    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444660    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444708    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444744    4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.469163    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473452    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473523    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473545    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473617    4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.474279    4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546439    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546514    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546614    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546663    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546703    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546736    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546767    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546797    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546804    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546853    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546882    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546933    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546798    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546893    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546830    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546935    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547036    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547090    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546895    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547110    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547168    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547179    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547229    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547230    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547265    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547299    4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547312    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547388    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547299    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547450    4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.675302    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677009    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677085    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677111    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677154    4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.677896    4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.720673    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.729830    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.747878    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.767522    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.774275    4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.796180    4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5 WatchSource:0}: Error finding container 9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5: Status 404 returned error can't find the container with id 9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.799379    4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4 WatchSource:0}: Error finding container 1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4: Status 404 returned error can't find the container with id 1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.799734    4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5 WatchSource:0}: Error finding container ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5: Status 404 returned error can't find the container with id ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5
Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.820260    4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.078091    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080041    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080077    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080090    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080113    4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.080701    4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.210323    4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.213420    4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:00:37.136907366 +0000 UTC
Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.242787    4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.242883    4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.282273    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0f34543a23695d40405f45f09ddde644d1ef2433fb7c8062037d25b86ea9e7f"}
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.284491    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e9c15b71a83cf5df98c86d34420ad30fc01bb981f737de4838ba486f68f97ae3"}
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.285759    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5"}
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.286494    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4"}
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.289520    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5"}
Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.354097    4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.354191    4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.356088    4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.356234    4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.560611    4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.560728    4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.621986    4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.881517    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883442    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883499    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883516    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883552    4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.884120    4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.098559    4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 15:54:50 crc kubenswrapper[4829]: E0217 15:54:50.099836    4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.210879    4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.214279    4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:24:09.864195874 +0000 UTC
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296467    4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" exitCode=0
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296560    4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503"}
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296640    4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298082    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298113    4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298124    4829 kubelet_node_status.go:724] "Recording event message for node"
node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.299869 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300192 4829 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6175d8f1ddb2b12d6f0334a1d306f1e4f5ebdc17f9babe2309c0c4381e39463f" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6175d8f1ddb2b12d6f0334a1d306f1e4f5ebdc17f9babe2309c0c4381e39463f"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300314 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 
15:54:50.304830 4829 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="51919905706fce2ad68f049f159ac6be0b6980eb772b0f9d152d06da8a0da5d1" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.304944 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.304938 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"51919905706fce2ad68f049f159ac6be0b6980eb772b0f9d152d06da8a0da5d1"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306697 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307039 4829 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307125 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307247 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 
15:54:50.309729 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.309776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.309797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315426 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315514 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315547 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.316222 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc 
kubenswrapper[4829]: I0217 15:54:50.317504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.317991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.318012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.103704 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.103809 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.178993 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.179144 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:51 crc 
kubenswrapper[4829]: I0217 15:54:51.210761 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.215076 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:01:39.390360166 +0000 UTC Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.223539 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319486 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319491 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319501 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328448 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328471 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.329913 4829 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19db8a23ef793b5e62f01237d70c305322e2d43ce7e2939ad74f9ec198bcd5c8" exitCode=0 Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.329984 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19db8a23ef793b5e62f01237d70c305322e2d43ce7e2939ad74f9ec198bcd5c8"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.330019 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.330984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.331009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.331021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.332748 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.332976 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333122 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8d74bf8d41be2eefa7a295c997bbf74d4c0a9c2bed7c0e9bac416a32f4def0b4"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334107 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.484200 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485268 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.485926 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.580056 
4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.580126 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.215533 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:02:46.940179913 +0000 UTC Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.339828 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.339816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208"} Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344433 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344534 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347035 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="75a3854d1046efad51952b098bedfdaa93df72ae94ae1b44638274a74ac7de02" exitCode=0 Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"75a3854d1046efad51952b098bedfdaa93df72ae94ae1b44638274a74ac7de02"} Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347198 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347208 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347312 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.348289 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349175 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349195 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349188 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349273 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.017842 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.138322 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.216662 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:00:03.023438628 +0000 UTC Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355118 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67dafa73e86617a4a84472e9edfb211bac1507e70cc570b39baf4f1a1c65e262"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355176 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84495d17d891b56ac71d7ff0b1ac041a6ecee29dd0493bbfb1130821bc83e5ab"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355226 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355233 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"daac1c001811204e8b9d046e40005e780ba97d6cdc858404b5a36078b62973b3"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355290 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356884 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356965 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.172184 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:54 crc 
kubenswrapper[4829]: I0217 15:54:54.217458 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:27:28.00322396 +0000 UTC Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bc697e5617d0dfcbb5aaf8b89ba0d526c05237f09023e5bcf4c4d2f254c64398"} Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366186 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66d512e305e59adc13db751b9e0f0f6dbd8c2279a190a066b6db715aab3a1d29"} Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366268 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366556 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366789 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.368150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc 
kubenswrapper[4829]: I0217 15:54:54.368364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.368518 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.446295 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.447282 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.450901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.451077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.451214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.463068 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.686716 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689253 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689276 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689313 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.776181 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.218304 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:45:12.234755928 +0000 UTC Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.372983 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.373152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.373449 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374636 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374765 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375056 4829 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.516361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.836000 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.218759 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:56:38.799013238 +0000 UTC Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.378695 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.378756 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.381046 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.099438 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.219879 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:46:06.003072596 +0000 UTC Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.249766 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.249986 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.380915 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.380944 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382954 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.383019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.383037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.220557 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:54:01.010815514 +0000 UTC Feb 17 15:54:58 crc kubenswrapper[4829]: E0217 15:54:58.376694 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.383451 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.836911 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.837046 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.221348 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:08:44.712022243 +0000 UTC Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.905862 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.906033 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc 
kubenswrapper[4829]: I0217 15:55:00.222113 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:07:41.109595055 +0000 UTC Feb 17 15:55:01 crc kubenswrapper[4829]: I0217 15:55:01.222496 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:15:57.474350483 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.210855 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.223527 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:26:47.475358304 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4829]: W0217 15:55:02.621556 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.621724 4829 trace.go:236] Trace[528201369]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:52.619) (total time: 10001ms): Feb 17 15:55:02 crc kubenswrapper[4829]: Trace[528201369]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:55:02.621) Feb 17 15:55:02 crc kubenswrapper[4829]: Trace[528201369]: [10.001790051s] [10.001790051s] END Feb 17 15:55:02 crc kubenswrapper[4829]: E0217 15:55:02.621761 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.944683 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.944764 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.952607 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.952701 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.026308 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]log ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]etcd ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-filter ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-apiextensions-informers ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/crd-informer-synced failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-system-namespaces-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/bootstrap-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]autoregister-completion ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-openapi-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 17 15:55:03 crc kubenswrapper[4829]: livez check failed
Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.026380 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.224660
4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:30:50.891324652 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.225636 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:33:40.501232841 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.832304 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.833000 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835416 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.852989 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.225897 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:19:46.526017778 +0000 UTC Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.404239 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405603 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4829]: I0217 15:55:06.226400 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:17:31.513112658 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.226756 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:15:21.282385537 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4829]: E0217 15:55:07.944983 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.948383 4829 trace.go:236] Trace[723915363]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:56.877) (total time: 11071ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[723915363]: ---"Objects listed" error: 11070ms (15:55:07.948) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[723915363]: [11.071010933s] [11.071010933s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.948421 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955048 4829 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955411 4829 trace.go:236] Trace[757981753]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:56.870) (total time: 11084ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[757981753]: ---"Objects listed" error: 11084ms (15:55:07.955) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[757981753]: [11.084903601s] [11.084903601s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955444 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.959337 4829 trace.go:236] Trace[1247023109]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:55.050) (total time: 12908ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[1247023109]: ---"Objects listed" error: 12908ms (15:55:07.959) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[1247023109]: [12.908994853s] [12.908994853s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.959404 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.963223 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.964682 4829 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.965004 4829 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966809 4829 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966867 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:07 crc kubenswrapper[4829]: E0217 15:55:07.988247 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha25
6:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":51
0526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abf
dc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995558 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995630 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"[container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.996402 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38916->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.996478 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38916->192.168.126.11:17697: read: connection reset by peer" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.997996 4829 csr.go:261] certificate signing request csr-8vjmq is approved, waiting to be issued Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.011093 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015758 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015807 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.022002 4829 csr.go:257] certificate signing request csr-8vjmq is issued Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.033520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.034443 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.034522 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.040440 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.055843 4829 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056106 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056123 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056190 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056262 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056316 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha25
6:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":51
0526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abf
dc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056803 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.189513b312a3174e\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.189513b312a3174e default 26179 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:54:48 +0000 UTC,LastTimestamp:2026-02-17 15:54:48.38264355 +0000 UTC m=+0.799661538,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067718 4829 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067777 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067852 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.085940 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092050 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092059 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092086 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.108159 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.108272 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109926 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109942 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.203506 4829 apiserver.go:52] "Watching apiserver" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.211038 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.211612 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212129 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212131 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212929 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213124 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213164 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213621 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213620 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213699 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213879 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215061 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215420 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215523 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215614 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215632 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215839 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215891 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.216983 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.218002 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.222938 4829 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.227008 4829 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:38:49.074023485 +0000 UTC Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.249872 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256818 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257116 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257165 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257202 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256887 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257308 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257333 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257357 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257387 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257408 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257424 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257439 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257455 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257496 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257516 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257534 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257644 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257676 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257704 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257727 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257748 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257786 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257826 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257842 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257860 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257925 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257942 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257957 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257975 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257997 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258022 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258088 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258113 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258195 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258253 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258275 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258300 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258351 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " 
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258397 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258420 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258456 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258482 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258528 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258598 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258622 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258648 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258680 
4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258703 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258730 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258754 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257957 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258080 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258093 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258839 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258848 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258863 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258880 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258108 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258153 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258381 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258379 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258470 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258617 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258623 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258636 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258696 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258999 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258809 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259040 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259048 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258848 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259184 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259209 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259239 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259245 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259260 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259294 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259316 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259340 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 15:55:08 crc 
kubenswrapper[4829]: I0217 15:55:08.259361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259404 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259425 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259448 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259470 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259494 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259515 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259535 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259559 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259599 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259624 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259644 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259693 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259734 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259758 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259782 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259805 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259827 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259875 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259936 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259980 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260004 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260077 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260155 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260179 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260200 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260262 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260296 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260322 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260343 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260365 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260389 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260414 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260435 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260456 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260479 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260526 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259310 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259395 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259414 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259440 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259444 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259564 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259646 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259719 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259818 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259924 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259987 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260099 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260130 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260137 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260158 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260333 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260332 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261554 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260352 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260409 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260476 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260496 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260522 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.260628 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.76060681 +0000 UTC m=+21.177624788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261638 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261664 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261686 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261706 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261752 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261772 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261815 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261835 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261899 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261920 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261964 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261983 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262005 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262123 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262143 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262164 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262184 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262208 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262228 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262250 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262270 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262290 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262313 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262333 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 
15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262354 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262396 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262418 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262438 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262458 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262482 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262524 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262547 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262567 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262651 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" 
(UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262675 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262697 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262718 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262739 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262787 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262807 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262830 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262854 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262900 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262921 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262927 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262947 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262973 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262998 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263021 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263044 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263088 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263109 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263115 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263152 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263169 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263195 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263213 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263231 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263288 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263342 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263365 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260721 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260952 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260996 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261086 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261326 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261383 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261392 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261446 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263715 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.264171 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.264272 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265636 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265663 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265666 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265936 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266013 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266449 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266584 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266885 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266975 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267070 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267624 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267671 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267726 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267868 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267923 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267947 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268025 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268044 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268098 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268199 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268621 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268698 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268736 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270733 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271163 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271367 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271434 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271687 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271871 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272208 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272551 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272515 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272814 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272800 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272902 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273381 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273451 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.274528 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273627 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.284135 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.284233 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302205 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302294 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302328 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302356 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" 
(UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302408 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302483 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302530 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302554 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302594 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302618 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302643 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302713 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302728 4829 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302740 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302753 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302765 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302777 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302790 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302816 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302829 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302842 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302854 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302866 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302879 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302891 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302903 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302916 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302929 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302941 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node 
\"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302953 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302965 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302977 4829 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302989 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303013 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303026 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303039 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 
15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303058 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303070 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303082 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303094 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303107 4829 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303119 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303133 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303145 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303210 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303247 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303263 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303277 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303290 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303303 4829 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303316 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303329 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303341 4829 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303354 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303367 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303380 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303393 4829 reconciler_common.go:293] "Volume detached for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303408 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303422 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303435 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303447 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303460 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303472 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303484 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303497 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303508 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303521 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303534 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303546 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303558 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303592 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on 
node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303605 4829 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303617 4829 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.304308 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.304953 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305040 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305185 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305384 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305405 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305661 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306174 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306550 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306722 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308435 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303628 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308897 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308910 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308920 4829 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308935 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308946 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308956 4829 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308965 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308976 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308986 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308996 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309007 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309017 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") 
on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309026 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309035 4829 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309045 4829 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309054 4829 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309063 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309073 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309083 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc 
kubenswrapper[4829]: I0217 15:55:08.309091 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309100 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309109 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309118 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309126 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309138 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309148 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309156 4829 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309165 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309174 4829 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309182 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309192 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309201 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309210 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309219 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309228 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309237 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309246 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309255 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309272 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309283 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309292 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309300 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309311 4829 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309320 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309328 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309336 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309345 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 
17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309353 4829 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309361 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309371 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309379 4829 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309388 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309397 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309405 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309414 4829 reconciler_common.go:293] "Volume detached for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309423 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309432 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309440 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309449 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309457 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309467 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309476 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" 
DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309484 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309493 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309503 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309511 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309520 4829 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309942 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310115 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310156 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310255 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310679 4829 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311058 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311439 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311492 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311448 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311500 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311610 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311625 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311669 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.811651893 +0000 UTC m=+21.228669971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311701 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:08.811681764 +0000 UTC m=+21.228699752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311910 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312635 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312736 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312758 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.313164 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.314032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.314205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317679 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317775 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.319363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.320199 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.320944 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.323193 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.327815 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.327896 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.328443 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.328550 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.330044 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.330812 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.332691 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.332885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.334435 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335476 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335801 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335821 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335836 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335921 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:08.8359006 +0000 UTC m=+21.252918578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335986 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335998 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.336010 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.336042 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.836034883 +0000 UTC m=+21.253052861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.337642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.361619 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362126 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362343 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362371 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362392 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.364482 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.364481 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.366140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.366820 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369005 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369255 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369261 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369450 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.374342 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.377144 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.387026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.390519 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.400112 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.407030 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.409965 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410023 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410050 4829 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410060 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410070 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410082 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410090 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410098 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410106 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" 
(UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410114 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410160 4829 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410172 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410182 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410192 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410201 4829 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410210 4829 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410220 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410229 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410276 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410286 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410297 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410306 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" 
DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410314 4829 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410322 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410331 4829 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410340 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410348 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410357 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410366 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc 
kubenswrapper[4829]: I0217 15:55:08.410374 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410383 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410392 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410400 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410409 4829 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410419 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410428 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410436 4829 reconciler_common.go:293] "Volume detached 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410446 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410456 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410464 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410473 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410481 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410490 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410498 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410507 4829 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410516 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410524 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410532 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410543 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410552 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410561 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410583 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410597 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410605 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410614 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410622 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410630 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410639 4829 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" 
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410647 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410657 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410666 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.412230 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.413726 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" exitCode=255 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.418553 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.419178 4829 scope.go:117] "RemoveContainer" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421330 4829 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.428697 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.432196 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.437419 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.440696 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.445760 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.452244 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.452902 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.455564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.463286 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.464133 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.464909 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.465297 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.466976 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.467535 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.467762 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.474773 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.483476 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.493169 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.500835 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.511874 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.512831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523973 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.529460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.532149 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.538383 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.539368 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.540155 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.541864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.542079 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.542535 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.543819 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.544566 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.545244 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.547733 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.550734 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a WatchSource:0}: Error finding container 15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a: Status 404 returned error can't find the container with id 15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.554440 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.554536 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.555007 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.556228 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.556738 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.557940 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.558645 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.559141 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.560392 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.560994 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.561464 4829 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.562049 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.562718 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9 WatchSource:0}: Error finding container 6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9: Status 404 returned error can't find the container with id 6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.566854 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.568021 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.569794 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.572490 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.573598 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.574434 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.575484 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.576173 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.576632 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.577661 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.578662 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.579440 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.580468 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.581106 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.581993 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.582906 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.583468 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.584302 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.584942 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.585431 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.586525 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.586989 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.587829 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.587863 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.612273 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.612298 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626817 4829 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626828 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626856 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.792191 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.805824 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.809168 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813632 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813731 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813754 4829 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.813739465 +0000 UTC m=+22.230757443 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813828 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813835 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813863 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:09.813856698 +0000 UTC m=+22.230874676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813874 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.813868788 +0000 UTC m=+22.230886766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.815615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.822322 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.835365 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836925 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.844357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.852587 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.861676 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.871151 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.880843 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.890951 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.899674 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.908177 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.914361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.914391 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914498 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914513 4829 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914524 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914540 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914567 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.914554806 +0000 UTC m=+22.331572784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914596 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914613 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914681 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.914661569 +0000 UTC m=+22.331679617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.919699 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.928983 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.937886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939164 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.947876 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.023262 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:50:08 +0000 UTC, rotation deadline is 2026-12-02 15:06:57.896570702 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.023339 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6911h11m48.873235342s for next certificate rotation Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041266 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143756 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.227933 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:14:01.04823406 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246089 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246113 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.278557 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.278750 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.302458 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-grnlx"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.302821 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.304540 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.304595 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.305049 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.322614 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.332948 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.340811 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347630 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347640 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.354028 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.366013 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.376361 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.386701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.402260 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.413746 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " 
pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421485 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421528 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.423840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.423868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1ed8cb51a32e4d7ef1dc86e7305df200f375ddb5084e7e7f512d68611ffa84ba"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.425902 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.428166 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.428193 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.436123 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.447307 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449866 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.458592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.470955 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.481873 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.497560 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.509392 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520167 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520261 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520525 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.523085 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.535096 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.537146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.547805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551708 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551768 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.560394 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.569637 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.580971 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.596303 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.611039 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.613214 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.639167 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653257 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.666952 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.685657 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.705657 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nhlmt"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.705942 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.706839 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-p9rjv"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.707303 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.707732 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.708136 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709443 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709709 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709923 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.713281 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.714682 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fzwcw"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.715565 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.716521 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718456 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718550 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.732829 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.754987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755057 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.759048 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.770684 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.785866 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.810143 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822202 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822295 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822312 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822345 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822362 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822389 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822402 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822415 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822443 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822563 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822539347 +0000 UTC m=+24.239557325 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822627 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822677 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822702 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822724 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822739 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822753 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822821 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822836 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822838 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822873 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:09 crc 
kubenswrapper[4829]: I0217 15:55:09.822878 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822905 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822894967 +0000 UTC m=+24.239913025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822928 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822918408 +0000 UTC m=+24.239936386 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822961 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822981 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.823010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.827067 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.844325 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858306 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.858638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858662 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.872458 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.886461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.905747 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.921900 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924173 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924458 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924676 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" 
(UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924739 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924805 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924875 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod 
\"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925196 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " 
pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924710 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 
crc kubenswrapper[4829]: I0217 15:55:09.925363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924694 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925031 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925111 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925614 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925643 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925691 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.925674943 +0000 UTC m=+24.342692921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925137 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925917 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") 
pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926187 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926318 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926377 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926418 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: 
\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926087 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926497 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926439 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.926686 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926749 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926813 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926966 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926993 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927410 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.926662 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.927477 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.927491 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 
15:55:09.927535 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.927520354 +0000 UTC m=+24.344538322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927661 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.928022 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.929723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.936097 4829 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.941342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.941529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.958488 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 
15:55:09.960387 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960699 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.993721 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.018076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nhlmt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.024169 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.026918 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88e25bc5_0b59_4edf_a8f6_1a5a026155c4.slice/crio-a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26 WatchSource:0}: Error finding container a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26: Status 404 returned error can't find the container with id a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26 Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.029229 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.035475 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.042368 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84d045f_af00_4d13_be03_8b03ad77f980.slice/crio-97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c WatchSource:0}: Error finding container 97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c: Status 404 returned error can't find the container with id 97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.048769 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57 WatchSource:0}: Error finding container 28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57: Status 404 returned error can't find the container with id 
28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57 Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.081364 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.107091 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.108022 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.112150 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.126039 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.145248 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165127 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165383 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.185730 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.205296 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.225501 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229590 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:31:44.776835945 +0000 UTC Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229857 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229872 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229886 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: 
\"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230056 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod 
\"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230143 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230165 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230238 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230271 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.245193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267956 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.279328 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.279393 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:10 crc kubenswrapper[4829]: E0217 15:55:10.279455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:10 crc kubenswrapper[4829]: E0217 15:55:10.279523 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.300232 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331147 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: 
\"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331188 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331282 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331298 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331374 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331393 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331401 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331586 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331608 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332093 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: 
\"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332178 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332652 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") 
pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332742 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332612 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332707 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332594 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332238 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332724 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332476 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.334049 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.341653 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.360770 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369876 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.394348 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.428157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.431397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grnlx" event={"ID":"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf","Type":"ContainerStarted","Data":"d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.431444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grnlx" event={"ID":"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf","Type":"ContainerStarted","Data":"b35c1076d506b65cd7a9130098aa099a5128e53e681618b95f0d118dc6dbc9ca"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432659 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.433925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" 
event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.433968 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.441637 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.441811 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26"} Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.445048 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad9f982_deda_446c_8801_dc47104eee62.slice/crio-24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e WatchSource:0}: Error finding container 24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e: Status 404 returned error can't find the container with id 24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.457487 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.471955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472292 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.479388 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.513265 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.554689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574252 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574332 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.594886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.634140 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676343 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.677446 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.717561 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.760375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.779056 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.800503 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.840565 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 
15:55:10.882297 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882311 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.884021 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.918797 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.963710 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985540 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985967 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.986149 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.003931 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.089861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.091021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194220 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194266 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.230144 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:38:04.729879071 +0000 UTC Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.279170 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.279340 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.296936 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.296984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.400894 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401074 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.458529 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.466869 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54" exitCode=0 Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.467125 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473503 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12" exitCode=0 Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473771 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.483503 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503858 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.504643 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.530349 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.547747 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.565647 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.584853 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.599727 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607853 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.627242 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.645547 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.668411 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.688824 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.707317 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709985 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.726656 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.739726 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.753701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.772289 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.785028 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.794706 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.807250 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.811927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812266 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.818464 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.838002 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853347 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853473 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853584 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853640 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.853626837 +0000 UTC m=+28.270644815 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853699 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853733 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.85372678 +0000 UTC m=+28.270744758 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853839 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.853818132 +0000 UTC m=+28.270836110 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.886030 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915244 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915270 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915279 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.922135 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.954259 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.954343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954449 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954471 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954478 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954488 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954494 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954500 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954559 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.954540581 +0000 UTC m=+28.371558559 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954593 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.954568512 +0000 UTC m=+28.371586490 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.957952 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.004918 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017815 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017853 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.038197 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.119968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120001 4829 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120008 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120030 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223220 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223295 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.230678 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:09:17.353789243 +0000 UTC Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.278605 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:12 crc kubenswrapper[4829]: E0217 15:55:12.278789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.278824 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:12 crc kubenswrapper[4829]: E0217 15:55:12.278991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326356 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326396 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429667 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429802 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.479801 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1" exitCode=0 Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.479914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486107 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486224 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486269 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.505006 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.528034 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534711 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534736 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534754 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.547809 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.569862 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.593491 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.612125 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638510 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638974 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638984 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.658651 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.678624 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.695520 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.718817 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742801 4829 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742842 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.744273 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.764230 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846730 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846798 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc 
kubenswrapper[4829]: I0217 15:55:12.846843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846861 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053391 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156304 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.231446 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:30:31.156869772 +0000 UTC Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258335 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258413 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.278717 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:13 crc kubenswrapper[4829]: E0217 15:55:13.278863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361433 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464928 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464967 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.492934 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb" exitCode=0 Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.492992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.512850 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.535857 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gbvgd"] Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.536448 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.538273 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.539421 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.539775 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.540211 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.542105 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.564090 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568712 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568758 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.580061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.599347 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.614277 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.632965 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.646888 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.661091 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670618 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.670670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670679 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671018 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.680187 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.692436 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.710175 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.736607 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.755626 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771910 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771783 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771996 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773288 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773331 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.774940 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.783854 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.791564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.802052 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.818403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.832958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.845206 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.860925 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.862245 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.878875 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.903108 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.918182 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.932074 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.953381 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.979751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979838 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.983174 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.003491 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.085947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.085988 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.086015 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc 
kubenswrapper[4829]: I0217 15:55:14.086032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.086044 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188362 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.232238 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:55:45.948537825 +0000 UTC Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.278409 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:14 crc kubenswrapper[4829]: E0217 15:55:14.278546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.279015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:14 crc kubenswrapper[4829]: E0217 15:55:14.279211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394955 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.501512 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571" exitCode=0 Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.501612 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.511501 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.513158 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gbvgd" event={"ID":"71cd8bd1-bb6a-405b-b23d-26c561d126d8","Type":"ContainerStarted","Data":"26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.513205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gbvgd" event={"ID":"71cd8bd1-bb6a-405b-b23d-26c561d126d8","Type":"ContainerStarted","Data":"d5ea150b466124ab69dc34fd9ed80073b57ad7873cf729b51d0a997087244eb8"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.520316 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.536564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.552004 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.564026 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.576549 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.618808 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.659818 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.671095 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.685603 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.697609 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.715692 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719211 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719235 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.727172 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.742043 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.752173 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.764929 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.774194 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.786888 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.796025 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.807618 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.815817 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821125 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.826850 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.835640 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.846713 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.860658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.872599 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.884073 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.898638 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.917296 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923189 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923260 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128402 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128438 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230500 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.232653 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:52:41.793865438 +0000 UTC Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.278914 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.279048 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436517 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.520263 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d" exitCode=0 Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.520333 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539460 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539541 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539642 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.540066 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.556915 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.572724 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.581795 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.612662 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.624505 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.641148 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643518 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643668 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.658566 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.679468 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.691777 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.705830 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.718960 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.733745 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747062 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747154 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.749720 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.849554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850275 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.891899 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892122 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892091526 +0000 UTC m=+36.309109534 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.892398 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.892458 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892626 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892669 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892658492 +0000 UTC m=+36.309676560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892766 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892798 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892789016 +0000 UTC m=+36.309806994 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952439 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.993864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.994053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994007 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994100 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994113 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994158 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.994144032 +0000 UTC m=+36.411162010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994240 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994259 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994268 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994302 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.994293676 +0000 UTC m=+36.411311654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054926 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054949 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054958 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157926 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.233760 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:09:39.689941749 +0000 UTC Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.278824 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.278923 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:16 crc kubenswrapper[4829]: E0217 15:55:16.278944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:16 crc kubenswrapper[4829]: E0217 15:55:16.279087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.361933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.361990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362006 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362028 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362045 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465651 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465668 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465712 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.528382 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3" exitCode=0 Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.528433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.544130 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.558434 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 
15:55:16.569801 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569814 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.577923 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.588436 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.600562 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.611981 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.623688 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.636310 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.646737 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.658719 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671488 4829 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.676154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.687098 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.697416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.709165 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774200 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980928 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087711 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191496 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.234791 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:22:01.322823243 +0000 UTC Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.278568 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:17 crc kubenswrapper[4829]: E0217 15:55:17.279151 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295845 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502467 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502654 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.538011 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.544953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545405 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545478 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545504 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.569939 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.584492 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.584727 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.585446 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605697 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605774 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.606514 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.624858 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.639298 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.652520 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.673406 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.686899 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.700871 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707950 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707967 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.708022 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.718859 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.737602 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.754044 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.775545 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.803145 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809994 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.817999 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.831658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.848934 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.864533 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.882199 4829 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.894812 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913421 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913551 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.925272 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.941112 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.959315 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.976214 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.988671 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.006081 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.036161 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119159 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197310 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.215374 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221329 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221387 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.236004 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:35:31.775002292 +0000 UTC Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.242560 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",
\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248950 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248980 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.264769 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270110 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.279000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.279013 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.279149 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.279424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.288472 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292912 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.296636 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.310194 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.310417 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312594 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312668 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.315866 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.332264 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.352121 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.369425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.391633 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.410012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414999 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.415020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.415034 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.426643 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.443184 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.460640 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.482215 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.504241 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521230 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.526086 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.545037 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728753 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728794 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.936022 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.936040 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039184 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142210 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.236677 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:10:51.115141822 +0000 UTC Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245742 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245760 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245818 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.279176 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:19 crc kubenswrapper[4829]: E0217 15:55:19.279418 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347713 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347753 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450238 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552737 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552807 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.620701 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667513 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667556 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667602 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770413 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.771828 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872889 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975498 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.181931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.237644 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:26:33.051878035 +0000 UTC Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.279358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.279449 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:20 crc kubenswrapper[4829]: E0217 15:55:20.279550 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:20 crc kubenswrapper[4829]: E0217 15:55:20.279673 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290965 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290985 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394807 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497521 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.558365 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.562922 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" exitCode=1 Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.562990 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.564069 4829 scope.go:117] "RemoveContainer" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.594127 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600226 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600288 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600317 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.617014 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.634926 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.650917 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.670279 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.684519 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703850 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.704924 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.718993 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.735002 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.751427 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.773892 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.794865 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.807002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.807025 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.815154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.848055 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.857161 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910145 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910194 4829 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012238 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012249 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113884 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113915 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.238069 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:39:29.07447338 +0000 UTC Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.278790 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4829]: E0217 15:55:21.278961 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319494 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422187 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.525352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.525722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.526374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.528025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.528180 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.577376 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.581686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.582248 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.601433 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.621793 4829 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632528 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.644127 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.666942 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.682450 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.704271 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.720326 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735603 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735803 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.736108 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.740232 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.758746 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.778412 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.803518 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.827219 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.840001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.840022 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.854480 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-net
ns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"}
,{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.874839 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943219 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943236 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.045987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046079 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046099 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046125 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046143 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149470 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149533 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.239026 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:34:41.961089537 +0000 UTC Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.252825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253329 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253823 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.278980 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.279010 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.279153 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.279245 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356322 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356363 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458928 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562901 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.589681 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.591635 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596515 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" exitCode=1 Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596622 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596703 4829 scope.go:117] "RemoveContainer" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.597851 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.598173 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.636811 4829 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.657132 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.667021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.667094 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.678423 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.686731 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5"] Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.687378 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.689657 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.689792 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.702846 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.721147 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.743563 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.768335 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.769931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.769994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770050 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770075 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771205 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766kg\" (UniqueName: \"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771600 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771634 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.794339 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.812871 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.829681 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.850280 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.866410 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872485 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872541 
4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872564 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872614 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766kg\" (UniqueName: \"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.874162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.874796 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.881243 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.887921 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.901613 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766kg\" (UniqueName: 
\"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.902826 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.918823 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.935701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.951034 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.969517 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc 
kubenswrapper[4829]: I0217 15:55:22.975415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975443 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.988933 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.005287 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.008718 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.015174 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.030494 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.045184 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.071169 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080258 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.083132 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.096757 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.109549 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.120341 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.130613 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.239318 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:05:31.065230588 +0000 UTC Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.279050 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.279250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286238 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.392974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393556 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.497311 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.497699 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600813 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601130 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"9b8ff1f9d61395f337f02c8e72b0dd2435eda51bb32b697f6493af99b0f8fcf0"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.602630 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.605021 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.605155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.623084 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.636357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.648670 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.665655 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.679216 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.700902 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704200 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.719816 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.740636 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.760302 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.780818 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.803255 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.827608 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.858235 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.879251 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.897114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913367 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913390 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.916514 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.939322 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.961689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.977258 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.982966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.983115 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.983158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983254 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.983214861 +0000 UTC m=+52.400232879 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983266 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983315 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983368 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.983354105 +0000 UTC m=+52.400372113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983487 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:39.983400806 +0000 UTC m=+52.400418814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.993763 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.011062 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015724 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.025830 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.040982 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.061805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.079885 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.083941 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.084021 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084124 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084157 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084161 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084179 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084181 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084240 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084247 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:40.084224777 +0000 UTC m=+52.501242795 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084341 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:40.08431749 +0000 UTC m=+52.501335468 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.096180 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.114380 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118656 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118684 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118702 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.128533 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.146234 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.160144 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.193759 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.194504 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.194630 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.210896 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221659 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221724 4829 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.230067 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.240038 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:40:10.460967743 +0000 UTC Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.245386 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.263507 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.276791 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.278513 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.278672 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.278770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.278887 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.285506 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.285611 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.295413 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.307227 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.320637 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc 
kubenswrapper[4829]: I0217 15:55:24.325043 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325118 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325165 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.336061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.354066 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.372958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.386281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.386384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.386471 4829 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.386567 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:24.886544168 +0000 UTC m=+37.303562146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.387043 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.404266 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.416079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432689 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.443254 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.466236 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.485230 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535982 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.536031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.536042 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742485 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742627 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.846100 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.918194 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.918398 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.918514 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:25.918481402 +0000 UTC m=+38.335499420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156617 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.240174 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:36:08.325710653 +0000 UTC Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.259983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260092 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260111 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.279358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.279547 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363297 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363356 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363378 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466274 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466376 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466394 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569645 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569723 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672365 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672387 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672403 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775257 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775273 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878199 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.927715 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.927895 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.927973 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:27.927951263 +0000 UTC m=+40.344969251 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980411 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980440 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083753 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187199 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187283 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.241318 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:31:46.010139699 +0000 UTC Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.278807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.278959 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279168 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.279247 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394394 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.498005 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601502 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601679 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705446 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809888 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913128 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119530 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223521 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.241652 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:01:19.823191777 +0000 UTC Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.256888 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.278362 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.278548 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.280357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.299539 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.319727 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326980 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.327023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.327041 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.339667 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.360082 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.381142 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.397333 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.417247 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431281 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.433723 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.450985 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.470905 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.491390 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.508548 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.530923 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535892 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.563980 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.584444 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc 
kubenswrapper[4829]: I0217 15:55:27.641415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641435 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.744979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745149 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849438 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849712 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.951202 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.951516 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.951655 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:31.951628226 +0000 UTC m=+44.368646234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057230 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160417 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.242795 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:29:49.625230541 +0000 UTC Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263951 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.278408 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.278492 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.279137 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.279500 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.279813 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.280260 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.329958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.343243 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.363502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367881 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.377836 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.392114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.408894 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.425469 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.441943 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.454804 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471644 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474086 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474268 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.477466 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.500915 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.501242 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505436 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.524375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
78d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.526686 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530890 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.538704 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.548098 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551958 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.556310 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.574725 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.575705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579828 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579869 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.587061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.591850 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.592071 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594781 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801786 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801909 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801932 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008777 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.009077 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.111956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112069 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112087 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215921 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.243308 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:46:25.710771275 +0000 UTC Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.278340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:29 crc kubenswrapper[4829]: E0217 15:55:29.278549 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319689 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422615 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525600 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628513 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731872 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731995 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.732017 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.836002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.836028 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939764 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043308 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146625 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.244186 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:24:31.194911638 +0000 UTC
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249598 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249656 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278661 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278709 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278737 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.278852 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.278977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.279284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352175 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352231 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459847 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563485 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667259 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667321 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978450 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.080627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.080979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081532 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184997 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.244746 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:28:37.548994436 +0000 UTC
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.278262 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:31 crc kubenswrapper[4829]: E0217 15:55:31.278432 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288087 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288132 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.392022 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.392039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495253 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597949 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.598009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.598032 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700881 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700989 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804359 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804395 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.907018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.907039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.999686 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:31.999873 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:31.999987 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.999957261 +0000 UTC m=+52.416975279 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009912 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113533 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113646 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217480 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.245873 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:35:17.545244306 +0000 UTC
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.279520 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.279569 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.279766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.280069 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.280852 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.281154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321299 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425416 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425513 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425605 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.528879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.528943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529072 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633180 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736450 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736734 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840460 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944763 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.047916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.047991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048050 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151125 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151182 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.246442 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:16:26.263626287 +0000 UTC Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.253923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254106 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.278491 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:33 crc kubenswrapper[4829]: E0217 15:55:33.278736 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357759 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464883 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567982 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.568030 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.568047 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671597 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775290 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879515 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983432 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983457 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983475 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.087972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088240 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.191951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.191993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192025 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.247287 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:55:28.565877523 +0000 UTC Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278715 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278785 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278898 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.278887 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.278980 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.279092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294516 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294554 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398221 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502173 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605441 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605486 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.708953 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709123 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812485 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915859 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915912 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.018959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122668 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122786 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.225944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226026 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226085 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.247694 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:27:48.229588399 +0000 UTC Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.279022 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:35 crc kubenswrapper[4829]: E0217 15:55:35.279164 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328154 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328467 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328488 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434310 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536851 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641114 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641229 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.743992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744110 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847496 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950476 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950620 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950644 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053721 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053764 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156152 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156209 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.248681 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:37:45.030157773 +0000 UTC Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259663 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278324 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.278488 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278565 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.279093 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.279207 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.279393 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362702 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362738 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465333 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465391 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568981 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568995 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.659237 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.662330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.662967 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671894 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.672000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.672021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.676233 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.689994 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T
15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8
b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.702891 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.715987 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.728754 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.743521 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.753985 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.768286 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc 
kubenswrapper[4829]: I0217 15:55:36.775269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775317 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775326 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.785645 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.800375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.813162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.839792 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b
9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.867428 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877395 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877424 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.884671 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.896428 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.910189 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979129 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979152 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081563 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183948 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183985 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.249335 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:25:50.856785763 +0000 UTC Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.279112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:37 crc kubenswrapper[4829]: E0217 15:55:37.279264 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286063 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286108 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.388989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389141 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389160 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389694 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492708 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595957 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595977 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.668992 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.669858 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674019 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" exitCode=1 Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674122 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.675806 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:37 crc kubenswrapper[4829]: E0217 15:55:37.677474 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.699871 4829 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.699961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700014 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700058 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.701376 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.722010 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.741689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.758894 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.778000 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc 
kubenswrapper[4829]: I0217 15:55:37.796705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.802759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.802945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803246 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803350 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.813090 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.833513 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.852695 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c85
8b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:
55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.883004 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service 
openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.902475 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906529 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906614 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906640 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.922497 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.941088 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.959748 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.981191 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:37.999970 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.009979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010041 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010058 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112904 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216605 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.250390 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:18:32.983966754 +0000 UTC Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.278863 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.279044 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.279275 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.279655 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.279977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.280113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.301385 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320314 4829 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.321683 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.339955 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b98
5ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.356502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.374304 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc 
kubenswrapper[4829]: I0217 15:55:38.396182 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.414106 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423826 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.434124 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.459018 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.495551 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service 
openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.517079 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526938 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.534461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.547163 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.558202 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.569936 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.581393 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628979 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.629004 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.680467 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.686297 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.686467 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.720679 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731789 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.740791 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.758000 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.775877 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.793340 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.818042 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835774 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.838568 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.854938 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.870856 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.888154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.903543 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.917208 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc 
kubenswrapper[4829]: I0217 15:55:38.932762 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937981 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.938006 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.951221 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.970561 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.986071 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992686 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.012691 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018455 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.038118 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.043017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.043035 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.063038 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067867 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.086199 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091772 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.110485 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.110880 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112674 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216377 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.251593 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:45:49.658052468 +0000 UTC Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.278958 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.279120 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.319994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320109 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.424108 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.424142 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528107 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528159 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631993 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734720 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838114 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838190 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941231 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988356 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988692 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.988824 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.988901 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.988877606 +0000 UTC m=+84.405895614 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.989245 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.989226965 +0000 UTC m=+84.406244973 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.989719 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.990126 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.990044427 +0000 UTC m=+84.407062435 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044160 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044267 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044319 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090184 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090450 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090502 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090523 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090457 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090622 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090638 
4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:12.090612962 +0000 UTC m=+84.507630970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090736 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:56.090715885 +0000 UTC m=+68.507733983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090790 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090817 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090837 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090884 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:12.090867619 +0000 UTC m=+84.507885637 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147343 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.229433 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.247360 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.247693 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250530 4829 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250713 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.251199 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.251718 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:50:14.984706002 +0000 UTC Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.259417 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.277188 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279408 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279414 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279602 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279653 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279765 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.312339 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.329288 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.345685 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354311 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.366725 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.384858 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.398958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.444437 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.462103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.462426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463182 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.472307 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.486840 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.499350 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.508891 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.518503 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc 
kubenswrapper[4829]: I0217 15:55:40.530114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566591 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566636 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566675 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669278 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772075 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772139 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.875521 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.875868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876303 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.979522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.979942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980561 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084451 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084519 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187215 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187267 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.252228 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:44:24.331860375 +0000 UTC Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.278862 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:41 crc kubenswrapper[4829]: E0217 15:55:41.279017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290254 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392981 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495277 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495348 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597747 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597787 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700738 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805953 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805967 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910488 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013538 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115877 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.116009 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.253106 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:43:11.845140857 +0000 UTC Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278665 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278724 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278747 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.278917 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.279054 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.279170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.425945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426121 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528768 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528792 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528841 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631895 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631917 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838038 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838165 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941505 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941643 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044529 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250719 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.253803 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:20:14.422488803 +0000 UTC Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.279260 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:43 crc kubenswrapper[4829]: E0217 15:55:43.279441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353693 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458200 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458243 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561533 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663946 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.766958 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767088 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869505 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869652 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.972987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076888 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180449 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.254645 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:19:09.033401846 +0000 UTC Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279325 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279368 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279466 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279671 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279785 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386147 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386191 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489673 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489861 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594223 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800905 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904655 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904783 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007930 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007946 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007986 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111954 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111995 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215191 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.255705 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:47:46.304036461 +0000 UTC Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.279089 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:45 crc kubenswrapper[4829]: E0217 15:55:45.279279 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317875 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317897 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420446 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420493 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626312 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729551 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729629 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729647 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832007 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935857 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038545 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038669 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038718 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141496 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141697 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244862 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.256286 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:05:42.852187954 +0000 UTC Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279142 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279270 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279344 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279754 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347948 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.450973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451073 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554122 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554199 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554243 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656708 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759682 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759705 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862980 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966707 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966729 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070664 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.173983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174210 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.256454 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:37:03.573425957 +0000 UTC
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.277947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278070 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278257 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:47 crc kubenswrapper[4829]: E0217 15:55:47.278422 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381641 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485314 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588200 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588268 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692078 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692219 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795144 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898877 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898944 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002154 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105164 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105189 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105205 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.207943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208058 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.256662 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:43:25.661975193 +0000 UTC
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279517 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279520 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279527 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.279789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.279946 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.280292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.299137 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310523 4829 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.320948 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.338905 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3
202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.356993 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.386455 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.405386 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415331 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415379 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415395 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415434 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.428014 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.443967 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.460935 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.480470 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.502402 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 
15:55:48.518504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518524 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.523523 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.542376 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.567996 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.600364 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622610 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.624502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
78d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.645681 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724659 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724678 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724755 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827723 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827856 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931447 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931500 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034561 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034645 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138678 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241807 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241886 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.257516 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:12:47.251197412 +0000 UTC Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.278908 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.279062 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.344957 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345122 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.397186 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398658 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398892 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.420152 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425930 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.450870 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456779 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.476950 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.481857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482221 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482497 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.502247 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507651 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507776 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.527296 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.527523 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530295 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530320 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530338 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633014 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633566 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633757 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737221 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737318 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840773 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943427 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046819 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149642 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149714 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252911 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252938 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.258114 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:46:59.731259616 +0000 UTC Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278732 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278732 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278804 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279712 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354685 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457295 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457376 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561388 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664519 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664558 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870838 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870991 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.974984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975045 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.077985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078105 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.259414 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:36:54.308851347 +0000 UTC Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.279102 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:51 crc kubenswrapper[4829]: E0217 15:55:51.279859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.280153 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:51 crc kubenswrapper[4829]: E0217 15:55:51.280381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284732 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386869 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489783 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489898 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592379 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592417 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694883 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797722 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002987 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105799 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105825 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207940 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.260566 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:31:14.73664316 +0000 UTC Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279004 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279047 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279100 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279562 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309838 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309898 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412598 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514919 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514975 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617497 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617561 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617596 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720742 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720790 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823907 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926168 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926209 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029493 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133461 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236369 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.261432 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:55:48.770283694 +0000 UTC Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.278869 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:53 crc kubenswrapper[4829]: E0217 15:55:53.279094 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.338952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339026 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339062 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441776 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544197 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544292 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646778 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852721 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852880 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955811 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955910 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059117 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059128 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.262528 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:53:53.858339509 +0000 UTC Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264188 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264348 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.278970 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.279037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279088 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.278891 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279251 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279363 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367264 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367314 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367359 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469791 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469893 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.571959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572055 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674692 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.776956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777050 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879564 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879589 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981484 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083280 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186587 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186598 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.263230 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:52:22.726916739 +0000 UTC Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.278839 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:55 crc kubenswrapper[4829]: E0217 15:55:55.278997 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288996 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.391986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392045 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596922 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596943 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699259 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904429 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006999 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.007024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.007061 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109511 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.176478 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.176710 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.176804 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.176774252 +0000 UTC m=+100.593792270 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212439 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212479 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.264261 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:27:39.973387979 +0000 UTC Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278801 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278875 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.278917 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.279013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.279228 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315086 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315168 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418362 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520432 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520517 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623349 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725792 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725832 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725849 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828407 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828449 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828464 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931909 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931922 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035718 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138502 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138757 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.264952 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:20:49.14061922 +0000 UTC Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.279271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:57 crc kubenswrapper[4829]: E0217 15:55:57.279400 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344954 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447298 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447360 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550068 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550112 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653275 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653305 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751419 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751493 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" exitCode=1 Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751543 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.752210 4829 scope.go:117] "RemoveContainer" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.756963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.756989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757026 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.766360 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.780078 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.795823 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.807615 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.819597 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.833139 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.843934 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.855430 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860006 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860081 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860095 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.866343 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.877892 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.888805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc 
kubenswrapper[4829]: I0217 15:55:57.900592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.912691 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.927592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.941215 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.962933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963456 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963775 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.982564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069519 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069687 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.174187 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.265940 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:14:21.885526191 +0000 UTC Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277332 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.278745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.278872 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.279107 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.279201 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.279449 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.279551 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.296088 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.306459 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.316831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.328813 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.341299 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.351728 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.367674 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.377467 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380746 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.389848 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc 
kubenswrapper[4829]: I0217 15:55:58.402502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.439007 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.450285 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.459859 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.477911 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483604 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483651 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.507323 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.520459 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.537425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586004 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586118 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688411 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688492 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688503 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.756665 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.756728 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.771277 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.783782 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791423 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791465 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 
15:55:58.791507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791519 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.795834 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.810965 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.833569 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.851461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.864622 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.878012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.891845 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894416 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.906912 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.917645 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.930049 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.948831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.963056 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.977682 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.990521 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.001799 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc 
kubenswrapper[4829]: I0217 15:55:59.099471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099541 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099554 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202546 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.266668 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:47:34.294881292 +0000 UTC Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.279092 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.279281 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306508 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306566 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408948 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408990 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511877 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586446 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586467 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.606379 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611730 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611814 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.626359 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630262 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.650599 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.657879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658174 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.681835 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686232 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686249 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.702458 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.702875 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705123 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808372 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808477 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911307 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014298 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117587 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117604 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117615 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219765 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.267228 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:59:39.171855671 +0000 UTC Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278691 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278711 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.278848 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278738 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.278949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.279127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323555 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426551 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426569 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.529937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530075 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530094 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.632996 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736363 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839190 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839232 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941421 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941460 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941482 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044515 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146533 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146597 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146634 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.249991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.267647 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:35:56.95879826 +0000 UTC Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.279054 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:01 crc kubenswrapper[4829]: E0217 15:56:01.279268 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353372 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353388 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353400 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456118 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456128 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456153 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558206 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558284 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661108 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661143 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763851 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867004 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867059 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867088 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969168 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072144 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175177 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.268491 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:58:45.95677128 +0000 UTC Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278194 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278335 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278423 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278640 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384157 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384176 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384225 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487878 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597641 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597667 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597702 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597725 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700735 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804003 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804100 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906859 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009407 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009416 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112556 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.216003 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.269191 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:58:47.101659077 +0000 UTC Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.278795 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:03 crc kubenswrapper[4829]: E0217 15:56:03.278991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318883 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421371 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523558 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523677 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625905 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625936 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729531 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833422 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040663 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040686 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040703 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143848 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.246945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.246996 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247053 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.269771 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:42:29.346090317 +0000 UTC Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278308 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278369 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.278517 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278649 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.278822 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.279039 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350480 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453629 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557596 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557656 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660916 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764707 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764743 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764775 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868270 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971566 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971743 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971798 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178790 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.270043 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:11:12.855564434 +0000 UTC
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.278514 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:05 crc kubenswrapper[4829]: E0217 15:56:05.278640 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281195 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281226 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281254 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.384959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385129 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488414 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.590955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591101 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693700 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796445 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899881 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899924 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003322 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003344 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.209972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210087 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210129 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.270472 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:43:16.270632701 +0000 UTC
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.279076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.279949 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.280110 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.280255 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.280690 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.281286 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.281453 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.312937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313028 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313121 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416308 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.519998 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520043 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520056 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520083 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623120 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726826 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.785952 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.789103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"}
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.789736 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.806539 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z"
Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.822331 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc 
kubenswrapper[4829]: I0217 15:56:06.829919 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829934 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.844797 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.860506 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.881072 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.899086 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.917782 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.931972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932041 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932052 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.934377 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc 
kubenswrapper[4829]: I0217 15:56:06.954136 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.979679 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.996235 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.011416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.030663 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036336 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.056705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.072325 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.090615 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.107195 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139466 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139498 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.242891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243237 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.271049 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:07:23.996740063 +0000 UTC Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.278392 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:07 crc kubenswrapper[4829]: E0217 15:56:07.278638 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345792 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449188 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552589 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656278 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.758998 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759134 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.795498 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.796408 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800118 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" exitCode=1 Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800239 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.801184 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:07 crc kubenswrapper[4829]: E0217 15:56:07.801438 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.833772 4829 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b5
12dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.860367 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-
bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc 
kubenswrapper[4829]: I0217 15:56:07.861384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861441 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.878773 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.895756 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.912721 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.927734 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.949556 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964665 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964812 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.967788 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.990785 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.011543 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.030826 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.046408 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.061012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.067240 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067392 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.077856 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915
fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.098373 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.115563 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.134741 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170687 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 
15:56:08.170710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170723 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.272082 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:53:45.67667506 +0000 UTC Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274799 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279298 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279394 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279654 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279811 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.300668 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.318377 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.339162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.359391 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-
bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.376335 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.396708 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.414416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.427156 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.444726 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.461809 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.477793 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480891 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.493486 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.514737 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.531812 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.548764 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.567847 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584246 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584384 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.587806 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687556 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687597 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687636 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791673 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.807364 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.813434 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.813920 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.834989 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.849959 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.862334 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.884621 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894712 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.895027 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.900167 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.913886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.925907 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.936658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.953179 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.962400 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.972226 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.989011 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006870 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.007146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.007272 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.008130 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.021206 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.037181 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.052403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.076162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for 
network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111177 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111251 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111261 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214612 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214673 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.272549 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:36:28.910815691 +0000 UTC Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.278980 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.279157 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317623 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421089 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627620 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627711 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627752 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730326 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730366 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834281 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834382 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847407 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.861193 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.875731 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878759 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.888621 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892591 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.909523 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913536 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913581 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913596 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.929557 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.929900 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936768 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936816 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.038997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039059 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142318 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245228 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.273645 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:01:14.850278956 +0000 UTC Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279113 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.279295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279442 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279469 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.279889 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.280229 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.294851 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348359 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450962 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553253 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553264 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553297 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655475 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655484 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860604 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963296 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065708 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168798 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168904 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168921 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271253 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.274425 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:09:15.617018177 +0000 UTC Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.278839 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:11 crc kubenswrapper[4829]: E0217 15:56:11.279000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373977 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.374003 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.374021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477620 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580534 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683069 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683092 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786153 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888201 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991439 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991508 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.057813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058027 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:16.057988318 +0000 UTC m=+148.475006326 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.058165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.058225 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058374 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058396 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058468 4829 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.058454089 +0000 UTC m=+148.475472097 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058530 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.05847919 +0000 UTC m=+148.475497208 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094769 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094831 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.160128 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.160242 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160431 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160473 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160472 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160485 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc 
kubenswrapper[4829]: E0217 15:56:12.160506 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160524 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160555 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.160536856 +0000 UTC m=+148.577554834 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160631 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.160602318 +0000 UTC m=+148.577620326 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.274568 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:24:28.213750051 +0000 UTC Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279232 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279328 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279462 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279623 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279774 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301139 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403655 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403672 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506394 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506449 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506505 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609799 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609955 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713267 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816990 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919832 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919891 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022838 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126906 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126992 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230268 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230320 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.275634 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:24:28.998641188 +0000 UTC Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.279000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:13 crc kubenswrapper[4829]: E0217 15:56:13.279185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334265 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334287 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437815 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643433 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643448 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.746964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747082 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.849968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850093 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953836 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057219 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057379 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265707 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.275857 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:21:45.806735462 +0000 UTC Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279218 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279276 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279369 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279502 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279731 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279856 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368435 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368475 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471435 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471492 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471510 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471549 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574698 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678747 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678793 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781611 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987496 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987562 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090942 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193918 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193978 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.276899 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:10:43.830143755 +0000 UTC Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.279214 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:15 crc kubenswrapper[4829]: E0217 15:56:15.279382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296230 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296361 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399527 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501859 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.604961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605092 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605139 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605157 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708196 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811164 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811257 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811275 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.916009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.916021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019100 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019111 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.121978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122083 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224424 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.277239 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:36:46.063137704 +0000 UTC Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278604 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278648 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.278777 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278847 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.278951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.279066 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327263 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327286 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430147 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430297 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532946 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532961 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636444 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636830 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739860 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.842012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.842029 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945116 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945136 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047718 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150295 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.251978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252011 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252041 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.277611 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:10:09.00921764 +0000 UTC Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.278976 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:17 crc kubenswrapper[4829]: E0217 15:56:17.279304 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.298209 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355721 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.458941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459082 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562975 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666531 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666660 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.770888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.770976 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771045 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874333 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874444 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874504 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978251 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978273 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081465 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081510 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184936 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184953 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278654 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:39:40.198448928 +0000 UTC Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278803 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278933 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279063 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.279123 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279349 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279465 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.293986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294056 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.313115 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.313032474 podStartE2EDuration="38.313032474s" podCreationTimestamp="2026-02-17 15:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.310521392 +0000 UTC m=+90.727539380" watchObservedRunningTime="2026-02-17 15:56:18.313032474 +0000 UTC m=+90.730050492" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.367416 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nhlmt" podStartSLOduration=69.367358644 podStartE2EDuration="1m9.367358644s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.367069517 +0000 UTC m=+90.784087535" watchObservedRunningTime="2026-02-17 15:56:18.367358644 +0000 UTC m=+90.784376652" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.368067 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-grnlx" podStartSLOduration=71.36805664 podStartE2EDuration="1m11.36805664s" podCreationTimestamp="2026-02-17 15:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.345936076 +0000 UTC m=+90.762954084" watchObservedRunningTime="2026-02-17 15:56:18.36805664 +0000 UTC m=+90.785074658" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.383705 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gbvgd" podStartSLOduration=70.383684037 podStartE2EDuration="1m10.383684037s" 
podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.383058531 +0000 UTC m=+90.800076539" watchObservedRunningTime="2026-02-17 15:56:18.383684037 +0000 UTC m=+90.800702045" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396612 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499911 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499946 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.551688 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" podStartSLOduration=69.551665708 podStartE2EDuration="1m9.551665708s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.528270681 +0000 UTC m=+90.945288679" watchObservedRunningTime="2026-02-17 15:56:18.551665708 +0000 UTC m=+90.968683696" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.590191 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.590171907 podStartE2EDuration="1.590171907s" podCreationTimestamp="2026-02-17 15:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.562800613 +0000 UTC m=+90.979818601" watchObservedRunningTime="2026-02-17 15:56:18.590171907 +0000 UTC m=+91.007189895" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601845 4829 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.610529 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.610510559 podStartE2EDuration="1m10.610510559s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.608743835 +0000 UTC m=+91.025761823" watchObservedRunningTime="2026-02-17 15:56:18.610510559 +0000 UTC m=+91.027528547" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.611205 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.611200206 podStartE2EDuration="8.611200206s" podCreationTimestamp="2026-02-17 15:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.58943041 +0000 UTC m=+91.006448398" watchObservedRunningTime="2026-02-17 15:56:18.611200206 +0000 UTC m=+91.028218194" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.625169 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" podStartSLOduration=69.62515375 podStartE2EDuration="1m9.62515375s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
15:56:18.625078598 +0000 UTC m=+91.042096616" watchObservedRunningTime="2026-02-17 15:56:18.62515375 +0000 UTC m=+91.042171738" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.645710 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.645689067 podStartE2EDuration="1m10.645689067s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.643986075 +0000 UTC m=+91.061004093" watchObservedRunningTime="2026-02-17 15:56:18.645689067 +0000 UTC m=+91.062707085" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.677220 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podStartSLOduration=69.677199563 podStartE2EDuration="1m9.677199563s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.676511607 +0000 UTC m=+91.093529625" watchObservedRunningTime="2026-02-17 15:56:18.677199563 +0000 UTC m=+91.094217581" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 
15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704741 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807952 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117642 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117772 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.221013 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.221033 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.278893 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:26:07.337649599 +0000 UTC Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.279158 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:19 crc kubenswrapper[4829]: E0217 15:56:19.279618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.323976 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324058 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324101 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.429005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.429029 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532528 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636452 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843457 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843481 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946749 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946802 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049683 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049727 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:20Z","lastTransitionTime":"2026-02-17T15:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.113010 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:20Z","lastTransitionTime":"2026-02-17T15:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.179087 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"] Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.179680 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.182672 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.182953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.183499 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.184507 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260837 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260944 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260979 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.261012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279145 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:55:36.60181095 +0000 UTC Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279203 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279338 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279416 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279471 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.279642 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.279934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.280047 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.289765 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363120 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363273 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363338 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.365409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.375497 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.399211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.508598 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"
Feb 17 15:56:20 crc kubenswrapper[4829]: W0217 15:56:20.542927 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8cdf0f_945d_4110_9a3c_0c9aa337ae6b.slice/crio-528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9 WatchSource:0}: Error finding container 528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9: Status 404 returned error can't find the container with id 528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.856506 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" event={"ID":"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b","Type":"ContainerStarted","Data":"87e211cb02d5fa35f00618453223aa1f786622d3e8c1a06d7bea493776bce94d"}
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.856641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" event={"ID":"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b","Type":"ContainerStarted","Data":"528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9"}
Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.878842 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" podStartSLOduration=72.878767976 podStartE2EDuration="1m12.878767976s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:20.876029238 +0000 UTC m=+93.293047276" watchObservedRunningTime="2026-02-17 15:56:20.878767976 +0000 UTC m=+93.295786004"
Feb 17 15:56:21 crc kubenswrapper[4829]: I0217 15:56:21.278908 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:21 crc kubenswrapper[4829]: E0217 15:56:21.279298 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278472 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278475 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278508 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278954 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:23 crc kubenswrapper[4829]: I0217 15:56:23.278902 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:23 crc kubenswrapper[4829]: E0217 15:56:23.279515 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:23 crc kubenswrapper[4829]: I0217 15:56:23.279982 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"
Feb 17 15:56:23 crc kubenswrapper[4829]: E0217 15:56:23.280273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62"
Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279081 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279143 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279169 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279252 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279694 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:25 crc kubenswrapper[4829]: I0217 15:56:25.278476 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4829]: E0217 15:56:25.278926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.278991 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.279203 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.279204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.279334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.280156 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.280459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:27 crc kubenswrapper[4829]: I0217 15:56:27.279239 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:27 crc kubenswrapper[4829]: E0217 15:56:27.279425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.257356 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.257772 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.257876 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:57:32.257844537 +0000 UTC m=+164.674862555 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.278523 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.278649 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.278705 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.278834 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.279193 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.280866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:29 crc kubenswrapper[4829]: I0217 15:56:29.279015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:29 crc kubenswrapper[4829]: E0217 15:56:29.279178 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.278729 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.278882 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279097 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.279136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:31 crc kubenswrapper[4829]: I0217 15:56:31.278831 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:31 crc kubenswrapper[4829]: E0217 15:56:31.279016 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.278859 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279065 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.278888 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.279155 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279485 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:33 crc kubenswrapper[4829]: I0217 15:56:33.278805 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:33 crc kubenswrapper[4829]: E0217 15:56:33.279128 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.278743 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.278816 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.278949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.279028 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.279288 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.279423 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.280691 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"
Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.280999 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62"
Feb 17 15:56:35 crc kubenswrapper[4829]: I0217 15:56:35.278633 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:35 crc kubenswrapper[4829]: E0217 15:56:35.278793 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279264 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279345 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279463 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279707 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279843 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:37 crc kubenswrapper[4829]: I0217 15:56:37.278615 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:37 crc kubenswrapper[4829]: E0217 15:56:37.278806 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.278855 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.278921 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.279434 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283617 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283987 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:39 crc kubenswrapper[4829]: I0217 15:56:39.278690 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:39 crc kubenswrapper[4829]: E0217 15:56:39.278893 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278455 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278518 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278553 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278820 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278953 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:41 crc kubenswrapper[4829]: I0217 15:56:41.278610 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:41 crc kubenswrapper[4829]: E0217 15:56:41.278783 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.278952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.279715 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.279186 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.280030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.279086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.280302 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.278933 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:43 crc kubenswrapper[4829]: E0217 15:56:43.279375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951027 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951934 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951995 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" exitCode=1 Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7"} Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952082 4829 scope.go:117] "RemoveContainer" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952691 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 15:56:43 crc kubenswrapper[4829]: E0217 15:56:43.952989 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279213 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279231 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279475 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279972 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279818 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.957323 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 15:56:45 crc kubenswrapper[4829]: I0217 15:56:45.278708 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:45 crc kubenswrapper[4829]: E0217 15:56:45.278955 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279087 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279131 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279454 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279569 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:47 crc kubenswrapper[4829]: I0217 15:56:47.279192 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:47 crc kubenswrapper[4829]: E0217 15:56:47.279368 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.279053 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.279286 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.281136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281530 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281650 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.282933 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.287352 4829 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.403866 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.973928 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.976915 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6"} Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.977784 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.018985 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podStartSLOduration=100.018968601 podStartE2EDuration="1m40.018968601s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:49.016512554 +0000 UTC m=+121.433530532" 
watchObservedRunningTime="2026-02-17 15:56:49.018968601 +0000 UTC m=+121.435986579" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.278984 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:49 crc kubenswrapper[4829]: E0217 15:56:49.279155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.297024 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.297148 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:49 crc kubenswrapper[4829]: E0217 15:56:49.297247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:50 crc kubenswrapper[4829]: I0217 15:56:50.278839 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:50 crc kubenswrapper[4829]: E0217 15:56:50.279305 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:50 crc kubenswrapper[4829]: I0217 15:56:50.279627 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:50 crc kubenswrapper[4829]: E0217 15:56:50.279739 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:51 crc kubenswrapper[4829]: I0217 15:56:51.278276 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:51 crc kubenswrapper[4829]: I0217 15:56:51.278295 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:51 crc kubenswrapper[4829]: E0217 15:56:51.278410 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:51 crc kubenswrapper[4829]: E0217 15:56:51.278829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:52 crc kubenswrapper[4829]: I0217 15:56:52.278701 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:52 crc kubenswrapper[4829]: I0217 15:56:52.278718 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:52 crc kubenswrapper[4829]: E0217 15:56:52.279109 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:52 crc kubenswrapper[4829]: E0217 15:56:52.278965 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:53 crc kubenswrapper[4829]: I0217 15:56:53.279004 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:53 crc kubenswrapper[4829]: I0217 15:56:53.279033 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.279200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.279322 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.405917 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:54 crc kubenswrapper[4829]: I0217 15:56:54.278905 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:54 crc kubenswrapper[4829]: E0217 15:56:54.279150 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:54 crc kubenswrapper[4829]: I0217 15:56:54.279191 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:54 crc kubenswrapper[4829]: E0217 15:56:54.279323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:55 crc kubenswrapper[4829]: I0217 15:56:55.278503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:55 crc kubenswrapper[4829]: E0217 15:56:55.278749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:55 crc kubenswrapper[4829]: I0217 15:56:55.279093 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:55 crc kubenswrapper[4829]: E0217 15:56:55.279218 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:56 crc kubenswrapper[4829]: I0217 15:56:56.279173 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:56 crc kubenswrapper[4829]: I0217 15:56:56.279269 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:56 crc kubenswrapper[4829]: E0217 15:56:56.279326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:56 crc kubenswrapper[4829]: E0217 15:56:56.279478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279157 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279266 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:57 crc kubenswrapper[4829]: E0217 15:56:57.279481 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279713 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 15:56:57 crc kubenswrapper[4829]: E0217 15:56:57.279696 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.013407 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log"
Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.013830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27"}
Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.278516 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.278616 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.280307 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.280533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.406295 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:56:59 crc kubenswrapper[4829]: I0217 15:56:59.278726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:59 crc kubenswrapper[4829]: I0217 15:56:59.278726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:59 crc kubenswrapper[4829]: E0217 15:56:59.278898 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:59 crc kubenswrapper[4829]: E0217 15:56:59.279028 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:57:00 crc kubenswrapper[4829]: I0217 15:57:00.279164 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:57:00 crc kubenswrapper[4829]: E0217 15:57:00.279338 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:57:00 crc kubenswrapper[4829]: I0217 15:57:00.279439 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:57:00 crc kubenswrapper[4829]: E0217 15:57:00.279698 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:57:01 crc kubenswrapper[4829]: I0217 15:57:01.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:57:01 crc kubenswrapper[4829]: I0217 15:57:01.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:57:01 crc kubenswrapper[4829]: E0217 15:57:01.279139 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:57:01 crc kubenswrapper[4829]: E0217 15:57:01.279275 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:57:02 crc kubenswrapper[4829]: I0217 15:57:02.278506 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:57:02 crc kubenswrapper[4829]: I0217 15:57:02.278604 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:57:02 crc kubenswrapper[4829]: E0217 15:57:02.278773 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:57:02 crc kubenswrapper[4829]: E0217 15:57:02.279113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:57:03 crc kubenswrapper[4829]: I0217 15:57:03.278921 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:57:03 crc kubenswrapper[4829]: I0217 15:57:03.278938 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:57:03 crc kubenswrapper[4829]: E0217 15:57:03.279108 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:57:03 crc kubenswrapper[4829]: E0217 15:57:03.279238 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.278712 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.278853 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282196 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282234 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282732 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282903 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.279064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.279081 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.282329 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.282349 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.530339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.580856 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.581815 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.583215 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.583990 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.584121 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.591698 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.594211 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.594614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.595189 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615014 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615168 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615682 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616145 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616406 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616734 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616822 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.617209 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.619900 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.619960 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620119 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620261 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620270 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620309 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620357 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620380 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620481 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620549 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620712 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.621219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.621374 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622380 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622794 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2sdwc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.623254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.624951 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.625280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.625866 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.626236 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.626729 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.632075 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.632346 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633530 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633753 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633843 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633949 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634100 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634170 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634239 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634378 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634624 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634736 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634793 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634744 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634873 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635302 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635458 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.637900 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638118 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638433 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638632 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638766 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638983 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.639640 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645106 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645466 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645561 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.646262 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.646919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647149 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647299 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647360 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647596 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648220 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648428 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648638 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.649949 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.650526 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651097 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651184 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651778 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.652254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.661182 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.661611 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672051 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672116 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672048 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672376 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672926 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.673806 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.678795 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.679357 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.688828 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.688970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.689322 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.689968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.690606 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.691054 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692783 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692872 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692894 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692919 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692980 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693002 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693053 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693147 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693171 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693193 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693335 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693439 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693680 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693932 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.694092 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.694615 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.696735 4829 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.698299 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.698633 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.700018 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.700406 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.701182 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.701554 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.713331 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.713684 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714043 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714406 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5rwbn"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714641 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714764 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715006 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715111 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715274 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:57:10 
crc kubenswrapper[4829]: I0217 15:57:10.715447 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715739 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715798 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715870 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715980 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716090 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716474 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716514 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716963 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717070 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717201 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717299 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717403 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717444 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.719447 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720003 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720084 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724022 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724202 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724439 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724477 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724658 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724896 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.726026 4829 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.726934 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.727504 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728078 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728760 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728808 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.729281 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.736246 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.736519 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.742651 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.752752 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753131 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753803 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753887 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.755794 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.767862 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.768463 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.768760 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.769937 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.773176 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.774795 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.775465 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.775915 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.776374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.776439 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.778261 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.786002 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.789905 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.790093 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.790871 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.791358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.791687 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.792311 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.792512 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793801 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793820 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: 
\"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793932 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793974 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794001 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794064 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794086 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794109 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794129 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794160 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5x4hf"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794765 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc 
kubenswrapper[4829]: I0217 15:57:10.794990 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795175 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795198 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794175 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795600 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795607 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795736 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795772 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.796963 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.796968 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.797345 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.800078 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.800715 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.808226 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.808451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.811608 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812432 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.814305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.815422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.816318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.817626 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.818280 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.820014 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.821338 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.822595 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.825310 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.825733 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.826898 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.827667 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.832514 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.832814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.834566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.836922 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.838540 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pcvww"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.839394 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.840609 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.841335 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.846178 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.847084 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.847529 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.849478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.855547 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.859633 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.862326 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.864788 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.866218 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.866252 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.867704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.869100 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.870559 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.872023 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.873393 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.876366 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.877655 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.879319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.880407 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.881848 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.883229 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pcvww"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.884612 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.886055 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.886309 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.887077 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.887105 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.888253 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"]
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.919023 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.927070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.946414 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.966143 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.986438 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.006449 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.026146 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.046402 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.067164 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.086993 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.106470 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.126853 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.146378 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.166408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.186990 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.206278 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.227134 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.247120 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.275845 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.286543 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.306689 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.326669 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.346614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.366651 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.387391 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.406477 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.427032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.447545 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.466836 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.486969 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.506362 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.527102 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.546089 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.566824 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.586256 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.606946 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.626941 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.645444 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.666189 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705170 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705340 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705381 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705426 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705562 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705650 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705690 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705810 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706528 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706606 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707017 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707120 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707154 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707183 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707215 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707324 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707360 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.707727 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.207709478 +0000 UTC m=+144.624727576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708041 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708186 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708223 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708493 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708547 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708608 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496nb\" 
(UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708686 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708719 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708784 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: 
\"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708820 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708864 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708955 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708998 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709114 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709410 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709688 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.709803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709900 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709982 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710052 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710082 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96hm\" (UniqueName: \"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710112 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710140 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " 
pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710187 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710287 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710331 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710368 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710448 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710514 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: 
\"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710557 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710796 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: 
I0217 15:57:11.710891 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74hl\" (UniqueName: 
\"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711287 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711414 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 
15:57:11.711476 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711520 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711563 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711714 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711813 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711843 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711888 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " 
pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711918 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711976 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.712019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.726841 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.743455 4829 request.go:700] Waited for 1.014777653s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.745717 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.766306 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.786400 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.805898 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813071 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.813241 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.313199769 +0000 UTC m=+144.730217787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813351 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813436 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813470 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96hm\" (UniqueName: \"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813507 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813550 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813689 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813774 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813923 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813971 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814061 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjkd\" (UniqueName: \"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814489 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814715 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814790 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814857 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814929 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814960 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815023 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815046 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815080 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815114 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815155 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815245 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815286 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815321 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815356 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815358 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b74hl\" (UniqueName: \"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815475 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815518 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815562 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815644 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815804 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815878 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816022 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816101 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816136 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816237 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816265 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816426 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816468 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816571 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816637 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816725 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816809 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816876 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816911 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816976 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.817008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.817061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\"
(UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818780 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819698 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819890 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819943 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819989 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820092 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820142 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.820347 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820397 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820498 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: 
\"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821296 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.822091 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.822165 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823689 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823713 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824062 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824524 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: 
\"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824615 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824742 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwh8\" 
(UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824854 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824904 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824974 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 
crc kubenswrapper[4829]: I0217 15:57:11.825030 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825082 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825179 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825260 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825341 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825379 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496nb\" (UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825416 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825918 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.826197 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.826434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: 
\"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.827896 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.827902 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.828750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.829696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.830259 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.831537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831743 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831905 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831976 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832036 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832249 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832302 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" 
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832352 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832401 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832450 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.833490 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.834293 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:12.332710838 +0000 UTC m=+144.749728846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834632 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834937 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834981 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835074 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod 
\"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835788 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836226 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836305 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.837421 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838115 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838148 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: 
\"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838664 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.839258 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.840254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841818 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.842848 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.842940 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.843481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.843953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.844315 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.844913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845868 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 
15:57:11.846117 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846389 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846884 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846954 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847367 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847425 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847658 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.847923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.848093 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.848804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.849016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.849123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: 
\"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.850163 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.851085 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.851171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.852386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.852791 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.854262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.865902 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.885908 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.905880 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.925116 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.939429 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.939756 4829 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.439673378 +0000 UTC m=+144.856691386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.939901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940102 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940242 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940277 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940329 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940426 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940464 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940567 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.940654 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.440627704 +0000 UTC m=+144.857645712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940870 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940977 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941037 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941130 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941162 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjkd\" (UniqueName: \"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.944364 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.944442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945431 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946145 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946346 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946506 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946842 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946882 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946975 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947047 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947124 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947344 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947477 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947698 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947764 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947874 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.948684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.948891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.949510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.949688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957487 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.961196 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.969202 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.985939 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.991041 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.005370 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.025173 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.045247 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.049462 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.049847 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.549818024 +0000 UTC m=+144.966836042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.050378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.051039 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.551006847 +0000 UTC m=+144.968024875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.057659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.065372 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.085413 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.106394 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.126312 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.139489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.146522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.152048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.152249 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.65222054 +0000 UTC m=+145.069238558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.153037 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.153652 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.653628959 +0000 UTC m=+145.070646967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.156641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.165731 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.185212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.204953 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.210219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.225810 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.236206 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.246542 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.254751 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.254998 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.754966846 +0000 UTC m=+145.171984864 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.255561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.256139 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.756110237 +0000 UTC m=+145.173128265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.266600 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.270848 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.286089 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.306156 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.326193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.347282 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.353649 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.356559 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.356794 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.856771666 +0000 UTC m=+145.273789684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.357199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.357668 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.85764495 +0000 UTC m=+145.274662968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.376247 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.382189 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.385256 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.405916 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.414760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.426163 
4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.445746 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.452463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.452763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.461623 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.461745 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.961714451 +0000 UTC m=+145.378732469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.462521 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.463020 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.963000956 +0000 UTC m=+145.380018954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.485408 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.501403 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.505999 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.515323 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.525290 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.535468 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.564306 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.564424 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.064396385 +0000 UTC m=+145.481414403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.564894 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.565202 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.065194007 +0000 UTC m=+145.482211985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.566317 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.571320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.585258 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.590342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.605935 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.626625 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 
15:57:12.636885 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.645458 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.665062 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.666731 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.667136 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.16710048 +0000 UTC m=+145.584118508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.668088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.668644 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.168617831 +0000 UTC m=+145.585635849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.670186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.685919 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.705099 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.725829 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.735006 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.744134 4829 request.go:700] Waited for 1.856815812s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.746814 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.761016 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.765810 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.769126 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.769371 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.269326121 +0000 UTC m=+145.686344159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.770165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.770739 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.270712579 +0000 UTC m=+145.687730667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.792995 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.836530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96hm\" (UniqueName: \"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.861778 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.872225 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.872464 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.372434486 +0000 UTC m=+145.789452474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.872711 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.873272 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.373250549 +0000 UTC m=+145.790268537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.879313 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.892992 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.915044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b74hl\" (UniqueName: \"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.936298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod 
\"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.951913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.964996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.974406 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.974905 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.474890934 +0000 UTC m=+145.891908912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.980182 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.996797 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.997074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.014809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.036227 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a717f8_3264_4540_b132_ab42accb57f0.slice/crio-5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af WatchSource:0}: Error finding container 5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af: Status 404 returned error can't find the container with id 5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.036331 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.040273 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.058426 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496nb\" (UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.067923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.076043 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.076464 4829 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.576445487 +0000 UTC m=+145.993463465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.079231 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.084962 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.089019 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rwbn" event={"ID":"a5a717f8-3264-4540-b132-ab42accb57f0","Type":"ContainerStarted","Data":"5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af"} Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.103268 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: 
\"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.115210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.119683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.141015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.145967 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.163187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.177454 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.178303 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.179180 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.679134651 +0000 UTC m=+146.096152629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.184326 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.186561 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.199700 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.201159 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.222958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.239239 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.247263 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.249452 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.254000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.267499 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.267516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.271265 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.281518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.281879 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.781864577 +0000 UTC m=+146.198882555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.284502 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.289865 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302268 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302744 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.303678 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.316483 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.317153 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67525a8a_c8e8_469c_a60d_1676ac5b057e.slice/crio-d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61 WatchSource:0}: Error finding container d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61: Status 404 returned error can't find the container with id d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61 Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.321701 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.333425 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.333609 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.334604 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f36b68_dd7a_41a7_86ff_ebcf90897710.slice/crio-aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57 WatchSource:0}: Error finding container aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57: Status 404 returned error can't find the container with id 
aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57 Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.340769 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.343442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.349789 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.358602 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.360099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.365941 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.387991 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.388681 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.888665152 +0000 UTC m=+146.305683130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.390894 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.391118 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.398291 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.421977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjkd\" (UniqueName: \"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.429298 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.436032 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.438125 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.446061 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.448136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.464299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.471385 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.477929 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.480913 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.489510 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.489937 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.989896086 +0000 UTC m=+146.406914064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.493496 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.503131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.514932 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.523514 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.534644 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.538617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.561922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.581614 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.582908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b184f73_7f44_4ddb_b344_a5a635501c7d.slice/crio-5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f WatchSource:0}: Error finding container 5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f: Status 404 returned error can't find the container with id 5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.590423 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.590889 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.090868564 +0000 UTC m=+146.507886542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.603762 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.622183 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.638272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.653212 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.676373 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.682827 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.683225 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.696413 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.697834 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.197817234 +0000 UTC m=+146.614835212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.699539 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.711773 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.715966 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728041 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728142 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728529 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.729150 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.736108 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.752558 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.762428 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.797639 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.798003 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.297987269 +0000 UTC m=+146.715005237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.804801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.899481 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.899806 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.39979325 +0000 UTC m=+146.816811228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.915464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.961272 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"]
Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.981242 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.010929 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.011299 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.511282673 +0000 UTC m=+146.928300651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.025540 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"]
Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.055466 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a4515e_e65a_4069_bcfe_d84494a724cd.slice/crio-978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349 WatchSource:0}: Error finding container 978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349: Status 404 returned error can't find the container with id 978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.093870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"566daf6ef97a21afbb106de727058b5ec5000fee0dfa3b6a1036b5c171adcbe9"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.094992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" event={"ID":"67525a8a-c8e8-469c-a60d-1676ac5b057e","Type":"ContainerStarted","Data":"d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.096646 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"603fbe2bbf17c826cfad591ff76754f7ecaa69aaf747d706366365ecc1add41d"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.096667 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"316dd5f02c346c16ef62cf763a938e846701064a20818af5bda732cce8e72df1"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.101262 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.102876 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.106974 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5x4hf" event={"ID":"c67dea52-b0b7-4b48-80e1-54d9754487ed","Type":"ContainerStarted","Data":"42f71487e6c9416d650fb9479378cc5eafc93ef527535d64bc2f9be928c2e21b"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.108753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerStarted","Data":"6a23ac3a0952fee762d7b612b6d50abf950d5b8d2ac6689a55a814e3e26c2a02"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.109956 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerStarted","Data":"a4dd5884310a79cb7487b5f3cbe05eafb8d2a2c5440edad3ee0322f1cc8a15db"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.111056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"eed481f7d9690d5cd33c3bebacd3a1a1dad55b78483672ccb89eb85c02c576ac"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.112027 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.112354 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.612343622 +0000 UTC m=+147.029361600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.119926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerStarted","Data":"543bcf505a6976b4cac43a8840910c402bdffc26734b407176ab019a3047a028"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.122042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rwbn" event={"ID":"a5a717f8-3264-4540-b132-ab42accb57f0","Type":"ContainerStarted","Data":"4dc7f3d9fbd69c6b3bc32848725cef8ee9c30f51518454b3233f7773a7d7124d"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.124032 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2sdwc" event={"ID":"f73ce613-5317-4f8e-82c9-4af380ed614c","Type":"ContainerStarted","Data":"4f581f5407a6a10e129097935adf47fd9662a2d23b30d8744f71fa374c086d98"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126182 4829 generic.go:334] "Generic (PLEG): container finished" podID="8bea1514-e813-4a49-80fb-cb8de9827a40" containerID="7e949a1d2aec2e7d5eedff72e200761ca5a220197097ae30241195a97cb781de" exitCode=0
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126258 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerDied","Data":"7e949a1d2aec2e7d5eedff72e200761ca5a220197097ae30241195a97cb781de"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"986bf2e4716199b7eac93016c0621eb1eebd1297e66326732489ae500ece8e31"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.136052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" event={"ID":"2b184f73-7f44-4ddb-b344-a5a635501c7d","Type":"ContainerStarted","Data":"5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.137369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" event={"ID":"6410fb51-b781-4989-ba46-c7c6b189188b","Type":"ContainerStarted","Data":"7b15d8bc2751bd8736b3944e22fd70049f55782f76acc7c7bd4cd02aec3f909d"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.137411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" event={"ID":"6410fb51-b781-4989-ba46-c7c6b189188b","Type":"ContainerStarted","Data":"0cfce608f42d4974b1b6247e7a23a286e416801b3399c689c92146a376e0ffa2"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.138977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fq9th" event={"ID":"90ed6518-2fbf-4aa0-b136-d605a9cb972a","Type":"ContainerStarted","Data":"527719d05c26405f4f5254bcac7772cc42df0d531a22d05e6cb2bd21a5c61a4f"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.139002 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fq9th" event={"ID":"90ed6518-2fbf-4aa0-b136-d605a9cb972a","Type":"ContainerStarted","Data":"6d63051986b02c5b3c19ad353aa74e0dfd6e12ac87e0899288bc275c04f0c22f"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.139171 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.140097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" event={"ID":"44a4515e-e65a-4069-bcfe-d84494a724cd","Type":"ContainerStarted","Data":"978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.140764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" event={"ID":"546891ca-dff6-4af9-a495-8bdd561e4233","Type":"ContainerStarted","Data":"9affd3ab68fd7b3c20e771fcd2f9967cee71ac3b87ea7c8b798d4dcf33912d21"}
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.141101 4829 patch_prober.go:28] interesting pod/console-operator-58897d9998-fq9th container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.141134 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fq9th" podUID="90ed6518-2fbf-4aa0-b136-d605a9cb972a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.212777 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.213785 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.713764972 +0000 UTC m=+147.130782950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.264879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.271790 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.277779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.314334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.315166 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.815151761 +0000 UTC m=+147.232169739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.324748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"]
Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.402751 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffa6856_9b00_44e9_81c6_643defb47c04.slice/crio-73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa WatchSource:0}: Error finding container 73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa: Status 404 returned error can't find the container with id 73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.403978 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r"
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.415045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.415314 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.915289065 +0000 UTC m=+147.332307043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.454636 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5rwbn" podStartSLOduration=125.454616381 podStartE2EDuration="2m5.454616381s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:14.453225234 +0000 UTC m=+146.870243212" watchObservedRunningTime="2026-02-17 15:57:14.454616381 +0000 UTC m=+146.871634359"
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.517131 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.518971 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.018952496 +0000 UTC m=+147.435970464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.618035 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.618190 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.118167286 +0000 UTC m=+147.535185264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.618510 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.619476 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.119462301 +0000 UTC m=+147.536480279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.687393 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.693920 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.713722 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.715954 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.719913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.720191 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.220176901 +0000 UTC m=+147.637194879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.799876 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a11950_91e2_4d36_9d60_341b9a6b21b2.slice/crio-2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082 WatchSource:0}: Error finding container 2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082: Status 404 returned error can't find the container with id 2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082
Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.802742 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16271aa7_2602_467c_b9aa_31c491952eb8.slice/crio-8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a WatchSource:0}: Error finding container 8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a: Status 404 returned error can't find the container with id 8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.821688 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.822050 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.322025123 +0000 UTC m=+147.739043101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.877627 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.925950 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.926429 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.426415553 +0000 UTC m=+147.843433531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.967305 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.972215 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.984047 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.993106 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.996698 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"]
Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.998988 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.004960 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pcvww"]
Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.006810 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:15 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:15 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:15 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.006858 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.028290 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.028988 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.528957562 +0000 UTC m=+147.945975540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.030243 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bfb2da7_1a85_42f9_8c3f_c7997e85dd58.slice/crio-e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed WatchSource:0}: Error finding container e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed: Status 404 returned error can't find the container with id e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.087688 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26589ee7_3777_43d9_b378_df92780df986.slice/crio-ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77 WatchSource:0}: Error finding container ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77: Status 404 returned error can't find the container with id ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.130089 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.130431 4829 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.630412884 +0000 UTC m=+148.047431152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.182694 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" event={"ID":"67525a8a-c8e8-469c-a60d-1676ac5b057e","Type":"ContainerStarted","Data":"eda41034772f7bdeb5d62d6d5e72efb5492b3343ea32a02892e68333b850b929"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.193338 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.195864 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.206008 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" 
event={"ID":"44a4515e-e65a-4069-bcfe-d84494a724cd","Type":"ContainerStarted","Data":"27c05fe0520ce257814b0e3d807c25eb76e86257c1879b3887842ce44ef2fcf1"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.209061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.215368 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"36113181730fa1f7beb2ced6c6c8a0ef6d23eb8fce143213df4f409c8dff428c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.215805 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.217511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.218820 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" event={"ID":"d6a1e674-b813-4a95-b14e-a2774f390155","Type":"ContainerStarted","Data":"6899b897eae5b2b1565ae5797a1e9ca4e653c81ed21731e840093d7888e0dc31"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.220484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" event={"ID":"2b184f73-7f44-4ddb-b344-a5a635501c7d","Type":"ContainerStarted","Data":"9c6057c154aea9504dc5e44fc8488e5f722c96abd6234e1c1dd0a168293ecd4a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.222238 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" 
event={"ID":"c0ad3e99-7312-4c48-bbfc-5355df896d20","Type":"ContainerStarted","Data":"d33799c0407c610df2357b8b0d4b98ad4ff169623de6bcde5686a219f69fc75a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229655 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5ad87cd-b97f-483a-825a-46c77bd5d5e0" containerID="ed02cd9d7b185c18111c340613c8ded43af8f3c079eceb18aadb241b0edf7610" exitCode=0 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerDied","Data":"ed02cd9d7b185c18111c340613c8ded43af8f3c079eceb18aadb241b0edf7610"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229756 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerStarted","Data":"0e6c61ff90668f94006eb63d0a4e0f845c2564df697e51d8d8e7863fb74c322a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.231317 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.231704 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.731689809 +0000 UTC m=+148.148707787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.252853 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.261330 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" podStartSLOduration=127.261313692 podStartE2EDuration="2m7.261313692s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.253961803 +0000 UTC m=+147.670979781" watchObservedRunningTime="2026-02-17 15:57:15.261313692 +0000 UTC m=+147.678331670" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.262635 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.277076 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"8a4051d75d0a569d9ab067001b1eb1ef7ef5a2756c4abc2d56df35e7aaa688b4"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.299995 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" 
event={"ID":"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58","Type":"ContainerStarted","Data":"e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.303143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.321867 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerStarted","Data":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.322070 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.328833 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84cacb3d_ec7c_4a92_a265_237ea9218b5e.slice/crio-7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1 WatchSource:0}: Error finding container 7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1: Status 404 returned error can't find the container with id 7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.329790 4829 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9v7jj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 17 
15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.329829 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.332107 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.333337 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.833324785 +0000 UTC m=+148.250342763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.352019 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" event={"ID":"546891ca-dff6-4af9-a495-8bdd561e4233","Type":"ContainerStarted","Data":"b5596fa44d35b6a6f32181ed865a4bf2d91d05fe27abf522bc55c567f046b272"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.358199 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2sdwc" event={"ID":"f73ce613-5317-4f8e-82c9-4af380ed614c","Type":"ContainerStarted","Data":"a4b024337416c36e86a222c63d908cb1882c0fb522fcc67f558830c3af29efc4"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.358253 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" podStartSLOduration=126.35823692 podStartE2EDuration="2m6.35823692s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.313464857 +0000 UTC m=+147.730482845" watchObservedRunningTime="2026-02-17 15:57:15.35823692 +0000 UTC m=+147.775254888" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.360032 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.361267 4829 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.361320 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.374447 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"5379e190f94a5a1d87b2808fd8f701566d6284eb3d7358e29ff99bed1c660cfe"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.374499 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"36d29f8d0d3e061013a2cb72db7d3525140ace63a2a69976bb863cc588d702e3"} Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.376933 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf1e080_f5b6_4360_a74f_5524ece2120c.slice/crio-2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475 WatchSource:0}: Error finding container 2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475: Status 404 returned error can't find the container with id 2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.388991 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"33ea74caf3f710efa1c50da2d1988bfc860823b5f31111d7825d4392f9477810"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.389042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"2d792cc359e5b53472d99af40cfbdde690b7bafbe063e4836d6b92fafb28a982"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.390838 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" podStartSLOduration=126.390825514 podStartE2EDuration="2m6.390825514s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.390635729 +0000 UTC m=+147.807653707" watchObservedRunningTime="2026-02-17 15:57:15.390825514 +0000 UTC m=+147.807843492" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.411220 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" event={"ID":"32e15283-b4a3-40c9-8117-77d662f30438","Type":"ContainerStarted","Data":"22e76edd041efcfbf0dc5da922bf8d7a594fd427c6fdb877f0b8cc65f1b3d66e"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.411261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" event={"ID":"32e15283-b4a3-40c9-8117-77d662f30438","Type":"ContainerStarted","Data":"2c5da868cc99fbe2010be26b7d97a29f7850e268d600cd5a93e76a54acb1dd40"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.422145 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" event={"ID":"76ca2091-de8d-469c-832b-057ee57bb8ee","Type":"ContainerStarted","Data":"f26fec56587359c05d33f39e5c5ae96141b78bae60e393505ecc55ab81229826"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.422189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" event={"ID":"76ca2091-de8d-469c-832b-057ee57bb8ee","Type":"ContainerStarted","Data":"3157323b0193585b7e7e8fb85389c6beed7bffc48855be8a7f3b2d4229fd2148"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.428539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5x4hf" event={"ID":"c67dea52-b0b7-4b48-80e1-54d9754487ed","Type":"ContainerStarted","Data":"be112181820fca68d7ecea086c2d913941f087334cb5af8e9f7c31bd83eae60c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.433610 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.435442 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.935430423 +0000 UTC m=+148.352448401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.454362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"b1a2ebf23b6275b9a2761e1c747235db3c3bb107da694df042aa8cf585a8d6ae"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.459373 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fq9th" podStartSLOduration=126.459359282 podStartE2EDuration="2m6.459359282s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.457334156 +0000 UTC m=+147.874352134" watchObservedRunningTime="2026-02-17 15:57:15.459359282 +0000 UTC m=+147.876377260" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.459744 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"eb17c60a5af48946ed43715152a9653aa398d60247fdda4bb18ad05bc4aa3658"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.464858 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" 
event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerStarted","Data":"eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.464898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerStarted","Data":"dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.496110 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerStarted","Data":"8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.496614 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.497918 4829 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xn8fx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.497987 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.499303 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" podStartSLOduration=126.499290394 podStartE2EDuration="2m6.499290394s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.498144214 +0000 UTC m=+147.915162192" watchObservedRunningTime="2026-02-17 15:57:15.499290394 +0000 UTC m=+147.916308372" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.504497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"e87972fe228716c21ec7cecb1607e14e50dea5013a2a6768e543463984d2ebe1"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.506841 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" event={"ID":"34421a4c-a917-467e-938b-fe7e00cc76c4","Type":"ContainerStarted","Data":"178efd5fb7e07c92ea5f88e247dd25d64c95843ef475caa6ba3c9897df40ab0c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.524669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerStarted","Data":"054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531080 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerStarted","Data":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerStarted","Data":"7baa23e27dea651b430693897781e89b000dbe0f94cbc9c61bef0909c8c3ed1a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531965 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.532782 4829 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8kmp8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.532833 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.538409 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" podStartSLOduration=126.538382274 podStartE2EDuration="2m6.538382274s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.536442032 +0000 UTC m=+147.953460010" watchObservedRunningTime="2026-02-17 15:57:15.538382274 +0000 UTC m=+147.955400252" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.540075 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.542260 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.042239249 +0000 UTC m=+148.459257227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.553311 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"f2fc0b2b1d8fdbbe3cc91226fa0a74a41e4544358631bc5af3ae12552a60853d"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.577342 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" podStartSLOduration=128.57732458 podStartE2EDuration="2m8.57732458s" podCreationTimestamp="2026-02-17 15:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.574070202 +0000 UTC m=+147.991088180" watchObservedRunningTime="2026-02-17 15:57:15.57732458 +0000 UTC m=+147.994342558" Feb 17 15:57:15 crc 
kubenswrapper[4829]: I0217 15:57:15.583988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"ec7edf5ecebf89b444f3ce54ec59a1a67eb98262446dab1fd869ed6e92b9a7a7"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.593449 4829 generic.go:334] "Generic (PLEG): container finished" podID="c801e449-c529-4c10-a482-f6f3a8c24bb1" containerID="a68d67382eaa80ba8be14bf2537953dd5fa2811050d2a340647934a36708a69a" exitCode=0 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.594262 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerDied","Data":"a68d67382eaa80ba8be14bf2537953dd5fa2811050d2a340647934a36708a69a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.597077 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" event={"ID":"87a11950-91e2-4d36-9d60-341b9a6b21b2","Type":"ContainerStarted","Data":"2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.610979 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.619044 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5x4hf" podStartSLOduration=5.619027581 podStartE2EDuration="5.619027581s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.617265863 +0000 UTC m=+148.034283841" 
watchObservedRunningTime="2026-02-17 15:57:15.619027581 +0000 UTC m=+148.036045559" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.644459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.644822 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.144809899 +0000 UTC m=+148.561827877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.697474 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2sdwc" podStartSLOduration=126.697456457 podStartE2EDuration="2m6.697456457s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.657257047 +0000 UTC m=+148.074275025" watchObservedRunningTime="2026-02-17 15:57:15.697456457 +0000 UTC m=+148.114474435" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.746048 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.747752 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.24773766 +0000 UTC m=+148.664755638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.749850 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" podStartSLOduration=126.749834297 podStartE2EDuration="2m6.749834297s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.696282625 +0000 UTC m=+148.113300603" watchObservedRunningTime="2026-02-17 15:57:15.749834297 +0000 UTC m=+148.166852275" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.781273 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podStartSLOduration=126.781254569 podStartE2EDuration="2m6.781254569s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.750422823 +0000 UTC m=+148.167440801" watchObservedRunningTime="2026-02-17 15:57:15.781254569 +0000 UTC m=+148.198272547" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.842549 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" podStartSLOduration=126.84253178 podStartE2EDuration="2m6.84253178s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.790295434 +0000 UTC m=+148.207313422" watchObservedRunningTime="2026-02-17 15:57:15.84253178 +0000 UTC m=+148.259549758" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.847808 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.848134 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.348120831 +0000 UTC m=+148.765138809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.849787 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podStartSLOduration=126.849765876 podStartE2EDuration="2m6.849765876s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.842856079 +0000 UTC m=+148.259874047" watchObservedRunningTime="2026-02-17 15:57:15.849765876 +0000 UTC m=+148.266783854" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.902980 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" podStartSLOduration=127.902960069 podStartE2EDuration="2m7.902960069s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.902512796 +0000 UTC m=+148.319530774" watchObservedRunningTime="2026-02-17 15:57:15.902960069 +0000 UTC m=+148.319978047" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.956954 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.957507 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.457493307 +0000 UTC m=+148.874511285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.036234 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:16 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:16 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:16 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.036318 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.052622 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9fgb2" podStartSLOduration=127.052610236 
podStartE2EDuration="2m7.052610236s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.05164807 +0000 UTC m=+148.468666048" watchObservedRunningTime="2026-02-17 15:57:16.052610236 +0000 UTC m=+148.469628204" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.052920 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podStartSLOduration=128.052915324 podStartE2EDuration="2m8.052915324s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.011014868 +0000 UTC m=+148.428032846" watchObservedRunningTime="2026-02-17 15:57:16.052915324 +0000 UTC m=+148.469933302" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058616 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.059014 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.558998399 +0000 UTC m=+148.976016377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.062357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.077262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.095095 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" podStartSLOduration=127.095076187 podStartE2EDuration="2m7.095076187s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.092264521 +0000 UTC m=+148.509282499" watchObservedRunningTime="2026-02-17 15:57:16.095076187 +0000 UTC m=+148.512094165" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.160301 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.160644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.162692 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.662672009 +0000 UTC m=+149.079689987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.179564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.264961 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.265003 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.265286 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-17 15:57:16.765275691 +0000 UTC m=+149.182293669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.275538 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.306390 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.317867 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.367040 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.367335 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.867318687 +0000 UTC m=+149.284336665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.399866 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.468227 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.468703 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.968689416 +0000 UTC m=+149.385707394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.572855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.573239 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.07322312 +0000 UTC m=+149.490241098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.657310 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"d8738063a9316455aa27c7b35c49c10c3172bf359b044237c42da3eef4744bbb"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.660886 4829 csr.go:261] certificate signing request csr-4tf5h is approved, waiting to be issued Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.669709 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"3b329ae85fc93b1598d5d767e87ae2040624d8c6a4601992fd1b1d4b2dfcd1a6"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.670438 4829 csr.go:257] certificate signing request csr-4tf5h is issued Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.679158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.679464 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.17945242 +0000 UTC m=+149.596470398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.691453 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" podStartSLOduration=127.691434905 podStartE2EDuration="2m7.691434905s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.157022116 +0000 UTC m=+148.574040094" watchObservedRunningTime="2026-02-17 15:57:16.691434905 +0000 UTC m=+149.108452873" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.691668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" event={"ID":"9061d74f-5644-4fa3-8484-4bcf2508dbfa","Type":"ContainerStarted","Data":"9ccbfd6c5f7897c15d38c599d7fe0f7f6e15f334abcf0e6dc65f342f2870a50b"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.692128 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" 
event={"ID":"9061d74f-5644-4fa3-8484-4bcf2508dbfa","Type":"ContainerStarted","Data":"91a8b654ea6318c7bdcc2e777ebbf594c43059ccd19d43ee5e4dde06114f594c"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.703979 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" event={"ID":"d2f48424-451a-4a3a-a539-eb6ad78c8944","Type":"ContainerStarted","Data":"b6869cb48429f4a2ef61daf17cf98bf920d992f26c46ce5ea4849b674cde3857"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.704024 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" event={"ID":"d2f48424-451a-4a3a-a539-eb6ad78c8944","Type":"ContainerStarted","Data":"08cae84475e5d7689195c5c8153e01beb68dddb6bb3480c07a782359ee74fdf0"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.704394 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.719206 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720233 4829 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6c88x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720282 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" podUID="d2f48424-451a-4a3a-a539-eb6ad78c8944" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720449 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.732881 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn4qs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.732976 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.733306 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"1547c84f8887a6fa0af7b373472743c48e33c86cdfc43407b10c3f869057f845"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.733411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"296b15f58aefe25542504c198fd08590a3c9a8311f17649af14853f17ffcd7e6"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.741101 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" podStartSLOduration=127.741083671 podStartE2EDuration="2m7.741083671s" 
podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.726326361 +0000 UTC m=+149.143344339" watchObservedRunningTime="2026-02-17 15:57:16.741083671 +0000 UTC m=+149.158101639" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.741691 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" podStartSLOduration=127.741685718 podStartE2EDuration="2m7.741685718s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.691771864 +0000 UTC m=+149.108789852" watchObservedRunningTime="2026-02-17 15:57:16.741685718 +0000 UTC m=+149.158703696" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.746800 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" podStartSLOduration=127.746785645 podStartE2EDuration="2m7.746785645s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.744917865 +0000 UTC m=+149.161935843" watchObservedRunningTime="2026-02-17 15:57:16.746785645 +0000 UTC m=+149.163803623" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.756771 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"937557ef3533f2c8b77563c62228d4de2da5388be1edd73e57e3a29446cd648d"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.756807 4829 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"6723f685b755274b9a78fdb99273b071d30191d408a9e6244c59bbb0119f3a64"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.780279 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.780638 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.28054212 +0000 UTC m=+149.697560108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.780882 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.781292 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerStarted","Data":"b1e81ad1d4a0791c0992752500dff9bd438d1dd7f49591003e0b869a61c1b227"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.781751 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.782643 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.282631938 +0000 UTC m=+149.699649916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.799987 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podStartSLOduration=127.799965007 podStartE2EDuration="2m7.799965007s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.767725773 +0000 UTC m=+149.184743751" watchObservedRunningTime="2026-02-17 15:57:16.799965007 +0000 UTC m=+149.216982985" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.812416 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"633877f7f2aa0dcecf10c5c81b060f81e687b4e2737f8b112ab0b974acaf5016"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.812456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.822498 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:16 crc 
kubenswrapper[4829]: I0217 15:57:16.837143 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" podStartSLOduration=128.837126675 podStartE2EDuration="2m8.837126675s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.836331223 +0000 UTC m=+149.253349201" watchObservedRunningTime="2026-02-17 15:57:16.837126675 +0000 UTC m=+149.254144653" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.837717 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" podStartSLOduration=127.83771079 podStartE2EDuration="2m7.83771079s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.802342742 +0000 UTC m=+149.219360720" watchObservedRunningTime="2026-02-17 15:57:16.83771079 +0000 UTC m=+149.254728768" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.848892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" event={"ID":"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58","Type":"ContainerStarted","Data":"a1910f24f6c6a7cacf9e979d638a329fe2c97f714685164f68b63982184a4981"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.881117 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" event={"ID":"34421a4c-a917-467e-938b-fe7e00cc76c4","Type":"ContainerStarted","Data":"cedd0fd2d5fccc9a02a98e40a59dca56e24aafb03b875f1fab3154761ba7c22f"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.881843 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.885610 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.385593529 +0000 UTC m=+149.802611507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.885831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.891213 4829 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wj6cl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.891259 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" podUID="34421a4c-a917-467e-938b-fe7e00cc76c4" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.903070 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" podStartSLOduration=127.903055802 podStartE2EDuration="2m7.903055802s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.874519889 +0000 UTC m=+149.291537867" watchObservedRunningTime="2026-02-17 15:57:16.903055802 +0000 UTC m=+149.320073780" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.922031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dmlvg" event={"ID":"9b45ddda-3269-494c-b1d6-c1219a8f61db","Type":"ContainerStarted","Data":"ae6cc45f69c55d7db389700e5c08416c5c60975747df576f7f2f35a74fa04782"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.922456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dmlvg" event={"ID":"9b45ddda-3269-494c-b1d6-c1219a8f61db","Type":"ContainerStarted","Data":"cae98fe6706b1b7557a768201f72363f8d2f6b9548660e741144060d3fb2ebc8"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.925923 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" podStartSLOduration=127.925905262 podStartE2EDuration="2m7.925905262s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.907373899 +0000 UTC m=+149.324391867" watchObservedRunningTime="2026-02-17 15:57:16.925905262 +0000 UTC m=+149.342923240" Feb 17 15:57:16 crc 
kubenswrapper[4829]: I0217 15:57:16.958943 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"0b1f99fc51614f4b7fc9afa656921bbdfccae9d934a2bc74385cb3ce76dc2acb"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.973022 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" podStartSLOduration=127.973002429 podStartE2EDuration="2m7.973002429s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.928334028 +0000 UTC m=+149.345352006" watchObservedRunningTime="2026-02-17 15:57:16.973002429 +0000 UTC m=+149.390020417" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.973198 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dmlvg" podStartSLOduration=6.973193894 podStartE2EDuration="6.973193894s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.970352757 +0000 UTC m=+149.387370735" watchObservedRunningTime="2026-02-17 15:57:16.973193894 +0000 UTC m=+149.390211872" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.985909 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 
15:57:16.994689 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.494673326 +0000 UTC m=+149.911691294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.015200 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:17 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:17 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:17 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.015248 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.049299 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" event={"ID":"1bf1e080-f5b6-4360-a74f-5524ece2120c","Type":"ContainerStarted","Data":"a512859ca31c760893fb4c1cc711494b226e8ad4c97534f217d50f1afaa5bc34"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 
15:57:17.049338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" event={"ID":"1bf1e080-f5b6-4360-a74f-5524ece2120c","Type":"ContainerStarted","Data":"2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.070669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" podStartSLOduration=128.070650746 podStartE2EDuration="2m8.070650746s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.023126668 +0000 UTC m=+149.440144646" watchObservedRunningTime="2026-02-17 15:57:17.070650746 +0000 UTC m=+149.487668714" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076445 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"831941afd1b5f7e2e1478ae4342a5185c00290bf9f671a226ad456512c9727d8"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"3221ce37891012f56a8d7ec178ce30eb7e76a1d1c93de1b3e7f08982c8cb3e4a"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076492 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"d4adbdae49159dd3878f255eb9972440e57739daebf6bdd412077b66445ac73a"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.084524 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" podStartSLOduration=128.084508502 podStartE2EDuration="2m8.084508502s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.082628461 +0000 UTC m=+149.499646439" watchObservedRunningTime="2026-02-17 15:57:17.084508502 +0000 UTC m=+149.501526480" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.088274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.089149 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.589134207 +0000 UTC m=+150.006152185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.091917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerStarted","Data":"726e77e8162f3984c39e705776a8363dd40b05c8f0057d8cd04ec0dc488a2857"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.093488 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" event={"ID":"d6a1e674-b813-4a95-b14e-a2774f390155","Type":"ContainerStarted","Data":"43af76111f13869523242abaecf7ef61a624193affdd0ef4088c5f9d75c04cb3"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.096165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"d38b5a6ddeeb1117fd0f7d5af102725ff891c08f604cb48a5b370b61f04ec506"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.098308 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" event={"ID":"c0ad3e99-7312-4c48-bbfc-5355df896d20","Type":"ContainerStarted","Data":"befdfc9584a897dc19ead991a881040e7048710bc4a9c1f085df1c1c7fc95cae"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.098898 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 
15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.101408 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"481d865e25c664226e79682536a82dcc4bd81b19e0315cfcb10786ca946883f5"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.101429 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"3659c6fd523df2df7e164fe3b9b35230f92c34c59862e8684f6a8beee303f58f"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.102924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerStarted","Data":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.103517 4829 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xn8fx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.103542 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.110313 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" podStartSLOduration=128.110301981 podStartE2EDuration="2m8.110301981s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.107983228 +0000 UTC m=+149.525001206" watchObservedRunningTime="2026-02-17 15:57:17.110301981 +0000 UTC m=+149.527319959" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.148896 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" event={"ID":"87a11950-91e2-4d36-9d60-341b9a6b21b2","Type":"ContainerStarted","Data":"64f80805350a7166c111ca1105c4fc9581caebb1f5d00e83c7b51977866db4bd"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.151140 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" podStartSLOduration=128.151131068 podStartE2EDuration="2m8.151131068s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.149644498 +0000 UTC m=+149.566662476" watchObservedRunningTime="2026-02-17 15:57:17.151131068 +0000 UTC m=+149.568149046" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.192663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.193888 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.693876937 +0000 UTC m=+150.110894915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.200978 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"863eb000e928639403baef8d73809eaee49c1644f0f46b7f5ad5165d8ae72507"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.203138 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.203177 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.212981 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 
15:57:17.214444 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" podStartSLOduration=128.214433404 podStartE2EDuration="2m8.214433404s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.213345825 +0000 UTC m=+149.630363803" watchObservedRunningTime="2026-02-17 15:57:17.214433404 +0000 UTC m=+149.631451382"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.258433 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" podStartSLOduration=128.258419217 podStartE2EDuration="2m8.258419217s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.255117457 +0000 UTC m=+149.672135435" watchObservedRunningTime="2026-02-17 15:57:17.258419217 +0000 UTC m=+149.675437195"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.273413 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" podStartSLOduration=128.273396292 podStartE2EDuration="2m8.273396292s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.272820397 +0000 UTC m=+149.689838375" watchObservedRunningTime="2026-02-17 15:57:17.273396292 +0000 UTC m=+149.690414270"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.296498 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.298114 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.798088762 +0000 UTC m=+150.215106740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.299319 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.303893 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.803859408 +0000 UTC m=+150.220877386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.314888 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" podStartSLOduration=128.314863597 podStartE2EDuration="2m8.314863597s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.301038382 +0000 UTC m=+149.718056370" watchObservedRunningTime="2026-02-17 15:57:17.314863597 +0000 UTC m=+149.731881575"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.416070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.416344 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.916329927 +0000 UTC m=+150.333347905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.457090 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" podStartSLOduration=129.457074913 podStartE2EDuration="2m9.457074913s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.390839977 +0000 UTC m=+149.807857955" watchObservedRunningTime="2026-02-17 15:57:17.457074913 +0000 UTC m=+149.874092891"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.517230 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.517552 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.017540482 +0000 UTC m=+150.434558460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.542085 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.619380 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.619836 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.119817505 +0000 UTC m=+150.536835483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.672436 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 15:52:16 +0000 UTC, rotation deadline is 2026-12-14 20:15:18.757333616 +0000 UTC
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.672730 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7204h18m1.084606306s for next certificate rotation
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.720955 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.721289 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.221278305 +0000 UTC m=+150.638296283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.736125 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.736652 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.793374 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.793423 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.821872 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.822237 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.322222502 +0000 UTC m=+150.739240470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.923026 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.923550 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.423539589 +0000 UTC m=+150.840557567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.000114 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:57:18 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld
Feb 17 15:57:18 crc kubenswrapper[4829]: [+]process-running ok
Feb 17 15:57:18 crc kubenswrapper[4829]: healthz check failed
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.000160 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.024610 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.024886 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.524860996 +0000 UTC m=+150.941878974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.024982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.025271 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.525263536 +0000 UTC m=+150.942281514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.099709 4829 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hpnl2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.099774 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" podUID="c0ad3e99-7312-4c48-bbfc-5355df896d20" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.126104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.126258 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.626239115 +0000 UTC m=+151.043257093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.126451 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.126755 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.626744808 +0000 UTC m=+151.043762786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.207086 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"d41b991d10b6766ee512d7aae8b46900e6cffbbf2648151a449eb6ad40c72622"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.208846 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"65f946857047d98153311062180757a73e6eeddd287a4330d203ed29423d9e58"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.208953 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.210312 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"9e14e1a03a60219c0ef53547850b97729227c7a6e1e17cc1d411ea1866f73cfe"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.211660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3a380a1770f0bb511732fcc1623a1e5479af7d675e765af40ac262b823836216"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.211704 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"43f022f41f64d1f1b764b9e81c31205378f11145f4f781121cd851f3b4fbcff0"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7b72de055bcc4f0a409c26a96620551e2a27114bd83ca51aeff554d64617b848"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212849 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"76e0b2b19feb9de939b6f44585b1cbf15e1d2194f62da4593d8290e18d6a5523"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212991 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.214000 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"229143a17a645cb998990e9718ade2541120d9254779d36b0c5dcf21436b325f"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.214042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e0b6e998edb4dcdc67fd15619330062fc752aba15ac979bd8b57b8d4bf05739"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.215180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"6a22613625cade5750324cad03dcbf97c046ca6d64eb183613ac0b204d9f1fcb"}
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.216956 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn4qs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.216993 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.225492 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.228081 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.228220 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.728201498 +0000 UTC m=+151.145219476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.228353 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.228677 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.728669402 +0000 UTC m=+151.145687380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.244083 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" podStartSLOduration=129.244067619 podStartE2EDuration="2m9.244067619s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:18.238819867 +0000 UTC m=+150.655837845" watchObservedRunningTime="2026-02-17 15:57:18.244067619 +0000 UTC m=+150.661085597"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.285220 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.329381 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.329480 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.829457654 +0000 UTC m=+151.246475632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.332170 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.335629 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.835617861 +0000 UTC m=+151.252635839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.359037 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pcvww" podStartSLOduration=8.359015796 podStartE2EDuration="8.359015796s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:18.357034961 +0000 UTC m=+150.774052939" watchObservedRunningTime="2026-02-17 15:57:18.359015796 +0000 UTC m=+150.776033774"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.378756 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.432700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.433835 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.933813653 +0000 UTC m=+151.350831631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.537019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.537433 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.037421222 +0000 UTC m=+151.454439200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.638527 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.638812 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.138788181 +0000 UTC m=+151.555806159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.740007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.740503 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.240472087 +0000 UTC m=+151.657490065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.761043 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.844388 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.844597 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.344553269 +0000 UTC m=+151.761571247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.844755 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.845121 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.345114334 +0000 UTC m=+151.762132312 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.865188 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.945466 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.945715 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.445689301 +0000 UTC m=+151.862707269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.945771 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.946201 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.446169594 +0000 UTC m=+151.863187572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.000709 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:19 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.000797 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.047327 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.047456 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:19.547435529 +0000 UTC m=+151.964453507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.047628 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.047921 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.547913713 +0000 UTC m=+151.964931691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.148968 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.149305 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.649289801 +0000 UTC m=+152.066307779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.209697 4829 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pdm8f container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]log ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]etcd ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 15:57:19 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-startinformers ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 15:57:19 crc 
kubenswrapper[4829]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:57:19 crc kubenswrapper[4829]: livez check failed Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.209755 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" podUID="8bea1514-e813-4a49-80fb-cb8de9827a40" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.234685 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"9ce86529239be6427a836aac4379fc901e154f11f0a7c8e81c6f33235f7e23cf"} Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.248843 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.250185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.250690 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.75066384 +0000 UTC m=+152.167681818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.257330 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.269717 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.350803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.351051 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.85102274 +0000 UTC m=+152.268040718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.351437 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.355659 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.855648996 +0000 UTC m=+152.272666964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.452558 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.452686 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.952660036 +0000 UTC m=+152.369678024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.452784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.453035 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.953022625 +0000 UTC m=+152.370040603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.554499 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.554747 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.054718353 +0000 UTC m=+152.471736331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.555143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.555500 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.055483953 +0000 UTC m=+152.472501931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.653306 4829 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.656348 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.656537 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.156509772 +0000 UTC m=+152.573527740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.656706 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.657068 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.157061557 +0000 UTC m=+152.574079535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.757317 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.757527 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.257504191 +0000 UTC m=+152.674522169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.757853 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.758119 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.258105937 +0000 UTC m=+152.675123915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.784214 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.785248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.787883 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.799767 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.859024 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.859174 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.359149776 +0000 UTC m=+152.776167754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.859234 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.859522 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.359509875 +0000 UTC m=+152.776527853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.960898 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.961067 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.461039379 +0000 UTC m=+152.878057357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961207 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961233 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961290 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.961674 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.461658835 +0000 UTC m=+152.878676813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.986207 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.987078 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.989098 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.000709 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:20 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:20 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:20 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.000760 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.008376 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.062834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.062989 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:20.562950682 +0000 UTC m=+152.979968660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063136 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.063520 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.563512147 +0000 UTC m=+152.980530125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.064043 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.109631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kjt\" (UniqueName: 
\"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.163893 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.164069 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.664041702 +0000 UTC m=+153.081059680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164413 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: 
\"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.164427 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.664413312 +0000 UTC m=+153.081431370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.179433 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.180291 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.189330 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.253231 4829 generic.go:334] "Generic (PLEG): container finished" podID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerID="eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896" exitCode=0 Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.253325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerDied","Data":"eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896"} Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.255481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"35d90a2a6c53823db40d88de375ba86f18474c7e6fd718e0c4eb00068dfae0dd"} Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.255522 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"d8908f9d0e1e550ade00fc370466c6ed9b445cf0b8ee93135fd47d046d41d94f"} Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265746 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265895 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265927 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265977 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.266353 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.266481 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.766467699 +0000 UTC m=+153.183485677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.266702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.280680 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.281224 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.283323 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.284877 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.341299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.359814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368611 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368639 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.369213 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.869202784 +0000 UTC m=+153.286220752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.389084 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.390010 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.397879 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.407841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" podStartSLOduration=10.407825212 podStartE2EDuration="10.407825212s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:20.405858539 +0000 UTC m=+152.822876517" watchObservedRunningTime="2026-02-17 15:57:20.407825212 +0000 UTC m=+152.824843190" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.412850 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.467698 4829 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T15:57:19.653334416Z","Handler":null,"Name":""} Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470163 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470454 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470553 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470585 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.471067 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod 
\"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.471188 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.971144608 +0000 UTC m=+153.388162586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.471429 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.494676 4829 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.494721 4829 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.499630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572604 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572730 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572755 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.573187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.582603 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.582652 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.600275 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.617181 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679392 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 
15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679515 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.680332 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.680643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.682958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.715387 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " 
pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.762985 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.780095 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.792677 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.796228 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.812831 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.904745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.934296 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 15:57:20 crc kubenswrapper[4829]: W0217 15:57:20.959765 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5cfa35_799d_41b4_afa1_e5d056ceed8c.slice/crio-528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4 WatchSource:0}: Error finding container 528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4: Status 404 returned error can't find the container with id 528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.002506 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:21 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:21 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:21 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.002556 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.006882 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.186199 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.197893 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddd19c165_e47a_4b7f_aaf1_cd266eeb9cc1.slice/crio-337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a WatchSource:0}: Error finding container 337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a: Status 404 returned error can't find the container with id 337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.245892 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.249966 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958bc260_664c_466f_afd3_9a7ac9c119bf.slice/crio-e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf WatchSource:0}: Error finding container e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf: Status 404 returned error can't find the container with id e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262026 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262079 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" 
event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.267318 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerStarted","Data":"337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.273687 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.274177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.284533 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.287478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289342 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289495 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289535 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"9f6b76db525ea1716f4c1ce5158f77a01ac87265be5d53578be8975ef1a1c0b8"} Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.301354 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc817ced_7abe_422d_af13_779118b5fe0f.slice/crio-e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542 WatchSource:0}: Error finding container e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542: Status 404 returned error can't find the container with id e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.548423 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.693923 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.694018 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.694045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.695193 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.700226 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p" (OuterVolumeSpecName: "kube-api-access-rnj6p") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). 
InnerVolumeSpecName "kube-api-access-rnj6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.700601 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795112 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795488 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795503 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981034 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:21 crc kubenswrapper[4829]: E0217 15:57:21.981220 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981230 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981323 4829 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.982024 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.984348 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.994507 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.003445 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:22 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:22 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:22 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.003501 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099275 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099667 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201120 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201155 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 
15:57:22.201847 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.203589 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.220216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.307033 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.310469 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.336958 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerStarted","Data":"37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.337020 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerStarted","Data":"e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.337987 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.360997 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerID="89bc178927ed753306d120abe1c9fd96720b7ede9c5f70c06adb09dd17ed7ea0" exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.361111 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerDied","Data":"89bc178927ed753306d120abe1c9fd96720b7ede9c5f70c06adb09dd17ed7ea0"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" 
event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerDied","Data":"dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365799 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365928 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.369626 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.369683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.380031 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" podStartSLOduration=133.380016711 podStartE2EDuration="2m13.380016711s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:22.379527667 +0000 UTC m=+154.796545655" watchObservedRunningTime="2026-02-17 15:57:22.380016711 +0000 UTC m=+154.797034689" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383551 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" 
exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383635 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383665 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"5acc356c5d2ec47c5d87b88d2204b71dfd80af3eab05b77d8870f888eb4da2ab"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.402975 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.403900 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.413211 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.425329 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.425384 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.512455 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.512931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.513001 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.604509 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsznk\" (UniqueName: 
\"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617430 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.619224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.620060 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.639238 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.739975 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:22 crc 
kubenswrapper[4829]: I0217 15:57:22.743426 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.749949 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.979942 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.981340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.990110 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.995427 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.997963 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.006524 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:23 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:23 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:23 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.006593 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" 
podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134377 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.142766 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.142820 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.144175 4829 patch_prober.go:28] interesting pod/console-f9d7485db-9fgb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 
10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.144208 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9fgb2" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178715 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178784 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178887 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178931 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.182944 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 
15:57:23 crc kubenswrapper[4829]: W0217 15:57:23.194713 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43b8d950_926a_4dc1_82a3_be0e61618dff.slice/crio-e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259 WatchSource:0}: Error finding container e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259: Status 404 returned error can't find the container with id e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.238921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.239034 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.239083 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.240038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: 
\"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.240363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.273505 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.309813 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.394543 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.396100 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.440212 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529094 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" exitCode=0 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529239 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerStarted","Data":"d19f6da1913041c5fd10e98efa71ae0ed6c2d8facfc11c2aa17840a88a15c77f"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.533190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerStarted","Data":"e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554345 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554382 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655679 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655768 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.657516 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.658840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.682554 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.759092 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.768551 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:23 crc kubenswrapper[4829]: W0217 15:57:23.814043 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8370c4f_c05e_425c_a267_c270e36b5dfd.slice/crio-d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4 WatchSource:0}: Error finding container d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4: Status 404 returned error can't find the container with id d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.843992 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.845780 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.852789 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.852800 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.858897 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.898828 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.960293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.960339 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.012641 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:24 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:24 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:24 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.012961 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.068667 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod 
\"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.068803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.069026 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.069053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.071048 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" (UID: "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.071173 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.074795 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" (UID: "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.084513 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.175209 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.175245 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.198730 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.363415 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:24 crc kubenswrapper[4829]: W0217 15:57:24.405476 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dfe32e4_aee9_408a_9b01_4ab9f4da515f.slice/crio-f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75 WatchSource:0}: Error finding container f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75: Status 404 returned error can't find the container with id f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerDied","Data":"337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565442 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565506 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569567 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" exitCode=0 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569614 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.573686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.577023 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e" exitCode=0 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.577735 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.692757 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:24 crc kubenswrapper[4829]: W0217 15:57:24.727669 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee0fd92e_e4d2_4523_97bd_58e10e78bc41.slice/crio-dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d WatchSource:0}: Error finding container dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d: Status 404 returned error can't find the container with id dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.000850 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:25 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:25 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:25 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.000914 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.587036 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" exitCode=0 Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.587434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd"} Feb 17 15:57:25 crc 
kubenswrapper[4829]: I0217 15:57:25.590617 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerStarted","Data":"99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93"} Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.590658 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerStarted","Data":"dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d"} Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.641118 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.641102733 podStartE2EDuration="2.641102733s" podCreationTimestamp="2026-02-17 15:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:25.640175958 +0000 UTC m=+158.057193936" watchObservedRunningTime="2026-02-17 15:57:25.641102733 +0000 UTC m=+158.058120711" Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.000989 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:26 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:26 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:26 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.001043 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.638320 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerID="99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93" exitCode=0 Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.638707 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerDied","Data":"99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93"} Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.000011 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:27 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:27 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:27 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.000109 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.979525 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.001555 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.008704 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.074830 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.074874 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.075909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee0fd92e-e4d2-4523-97bd-58e10e78bc41" (UID: "ee0fd92e-e4d2-4523-97bd-58e10e78bc41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.081487 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee0fd92e-e4d2-4523-97bd-58e10e78bc41" (UID: "ee0fd92e-e4d2-4523-97bd-58e10e78bc41"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.175949 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.175980 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.495993 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684516 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerDied","Data":"dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d"} Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684640 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.327810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.333944 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.608325 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.184961 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.272904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.280562 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:40 crc kubenswrapper[4829]: I0217 15:57:40.773183 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.908044 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.908672 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bzhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pc95c_openshift-marketplace(958bc260-664c-466f-afd3-9a7ac9c119bf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.909948 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" Feb 17 15:57:52 crc 
kubenswrapper[4829]: I0217 15:57:52.424393 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:57:52 crc kubenswrapper[4829]: I0217 15:57:52.424442 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.200186 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.284140 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.284281 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-429d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cd6xf_openshift-marketplace(8d559324-3a7f-41a3-9229-b2b96294faad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.286248 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" Feb 17 15:57:53 crc 
kubenswrapper[4829]: I0217 15:57:53.739106 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.743283 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:57:53 crc kubenswrapper[4829]: W0217 15:57:53.758395 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c29406b_a65e_4386_8f7c_ac9dc76fb4cb.slice/crio-bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e WatchSource:0}: Error finding container bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e: Status 404 returned error can't find the container with id bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.827283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.845642 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692" exitCode=0 Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.845760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.867960 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" 
containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" exitCode=0 Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.868279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.883903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.889371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.891531 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.895135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d"} Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.900356 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.900643 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.900850 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.903778 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.903843 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.913244 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"128e5311b92e0ff5adac5b190ca185777df7094e564e6e77f54a20afef790025"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.915156 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.915327 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.917688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.917529 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" exitCode=0 Feb 17 15:57:55 crc kubenswrapper[4829]: I0217 15:57:55.946272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"ac628cf13344886cb954b95a68ba728d2c1763eba31ef74a7471eb425d7f3b99"} Feb 17 15:57:55 crc kubenswrapper[4829]: I0217 15:57:55.966104 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xdb29" podStartSLOduration=166.966089207 podStartE2EDuration="2m46.966089207s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:55.96547636 +0000 UTC m=+188.382494348" watchObservedRunningTime="2026-02-17 15:57:55.966089207 +0000 UTC m=+188.383107185" Feb 17 15:57:56 crc kubenswrapper[4829]: I0217 15:57:56.406104 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.959266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425"} Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.961894 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerStarted","Data":"22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f"} Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.977106 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5whh" podStartSLOduration=4.620229058 podStartE2EDuration="35.977093258s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="2026-02-17 15:57:24.579325126 +0000 UTC m=+156.996343104" lastFinishedPulling="2026-02-17 15:57:55.936189326 +0000 UTC m=+188.353207304" observedRunningTime="2026-02-17 15:57:57.975863255 +0000 UTC m=+190.392881253" watchObservedRunningTime="2026-02-17 15:57:57.977093258 +0000 UTC m=+190.394111236" Feb 17 15:57:58 crc kubenswrapper[4829]: I0217 15:57:58.985768 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plxhn" podStartSLOduration=4.431701224 podStartE2EDuration="39.985743554s" podCreationTimestamp="2026-02-17 15:57:19 +0000 UTC" firstStartedPulling="2026-02-17 15:57:21.273380318 +0000 UTC m=+153.690398286" lastFinishedPulling="2026-02-17 15:57:56.827422618 +0000 UTC m=+189.244440616" observedRunningTime="2026-02-17 15:57:58.984763187 +0000 UTC m=+191.401781175" watchObservedRunningTime="2026-02-17 15:57:58.985743554 +0000 UTC m=+191.402761562" Feb 17 15:57:59 crc kubenswrapper[4829]: I0217 15:57:59.986885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" 
event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerStarted","Data":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.004317 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lg78k" podStartSLOduration=3.630089022 podStartE2EDuration="39.004301408s" podCreationTimestamp="2026-02-17 15:57:21 +0000 UTC" firstStartedPulling="2026-02-17 15:57:23.532128825 +0000 UTC m=+155.949146803" lastFinishedPulling="2026-02-17 15:57:58.906341221 +0000 UTC m=+191.323359189" observedRunningTime="2026-02-17 15:58:00.002696915 +0000 UTC m=+192.419714883" watchObservedRunningTime="2026-02-17 15:58:00.004301408 +0000 UTC m=+192.421319386" Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.600531 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.600866 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436026 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:01 crc kubenswrapper[4829]: E0217 15:58:01.436409 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436420 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: E0217 15:58:01.436431 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436437 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436547 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436561 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436926 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.439110 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.439670 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.458187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.525408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.525545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626750 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626921 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.649160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.751982 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.004800 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.007323 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.009093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.030005 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pzvbr" podStartSLOduration=3.4391612289999998 podStartE2EDuration="40.029991438s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="2026-02-17 15:57:24.572603275 +0000 UTC m=+156.989621253" lastFinishedPulling="2026-02-17 15:58:01.163433484 +0000 UTC m=+193.580451462" observedRunningTime="2026-02-17 15:58:02.029592947 +0000 UTC m=+194.446610925" watchObservedRunningTime="2026-02-17 15:58:02.029991438 +0000 UTC m=+194.447009416" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.031962 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-plxhn" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:02 crc 
kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:02 crc kubenswrapper[4829]: > Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.046030 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z4qsx" podStartSLOduration=3.264362316 podStartE2EDuration="43.046013243s" podCreationTimestamp="2026-02-17 15:57:19 +0000 UTC" firstStartedPulling="2026-02-17 15:57:21.296765962 +0000 UTC m=+153.713783940" lastFinishedPulling="2026-02-17 15:58:01.078416889 +0000 UTC m=+193.495434867" observedRunningTime="2026-02-17 15:58:02.043533895 +0000 UTC m=+194.460551873" watchObservedRunningTime="2026-02-17 15:58:02.046013243 +0000 UTC m=+194.463031221" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.062969 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fpmz" podStartSLOduration=3.9366251979999998 podStartE2EDuration="39.062953392s" podCreationTimestamp="2026-02-17 15:57:23 +0000 UTC" firstStartedPulling="2026-02-17 15:57:25.589479724 +0000 UTC m=+158.006497702" lastFinishedPulling="2026-02-17 15:58:00.715807918 +0000 UTC m=+193.132825896" observedRunningTime="2026-02-17 15:58:02.060182527 +0000 UTC m=+194.477200505" watchObservedRunningTime="2026-02-17 15:58:02.062953392 +0000 UTC m=+194.479971370" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.196391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:02 crc kubenswrapper[4829]: W0217 15:58:02.206908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8581faef_5460_4e6b_8102_ba36b8a2c6b6.slice/crio-614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444 WatchSource:0}: Error finding container 614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444: Status 404 returned error can't find the container with id 
614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444 Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.308900 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.308945 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.750627 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.751440 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.797792 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.016826 4829 generic.go:334] "Generic (PLEG): container finished" podID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerID="a95346330ded3294f170b3b328f3cf8dcf6cfbd212834348dcf817d1dbf1a33c" exitCode=0 Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.017420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerDied","Data":"a95346330ded3294f170b3b328f3cf8dcf6cfbd212834348dcf817d1dbf1a33c"} Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.017450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerStarted","Data":"614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444"} Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.063978 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.310586 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.310850 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.375980 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lg78k" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:03 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:03 crc kubenswrapper[4829]: > Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.760505 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.760540 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.301679 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.345180 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pzvbr" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:04 crc kubenswrapper[4829]: > Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.370864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.370919 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.371160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8581faef-5460-4e6b-8102-ba36b8a2c6b6" (UID: "8581faef-5460-4e6b-8102-ba36b8a2c6b6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.375771 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8581faef-5460-4e6b-8102-ba36b8a2c6b6" (UID: "8581faef-5460-4e6b-8102-ba36b8a2c6b6"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.472609 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.472642 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.792646 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fpmz" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:04 crc kubenswrapper[4829]: > Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.028339 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.029280 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerDied","Data":"614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444"} Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.029325 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444" Feb 17 15:58:06 crc kubenswrapper[4829]: I0217 15:58:06.401938 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:06 crc kubenswrapper[4829]: I0217 15:58:06.402173 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5whh" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" containerID="cri-o://22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" gracePeriod=2 Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.047934 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" exitCode=0 Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.048338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f"} Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.188830 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.322826 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.322959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.323066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.324643 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities" (OuterVolumeSpecName: "utilities") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.328008 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk" (OuterVolumeSpecName: "kube-api-access-jsznk") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "kube-api-access-jsznk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.345270 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424438 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424470 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424480 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.037996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038373 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038422 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-utilities" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038436 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-utilities" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038452 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038465 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038490 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-content" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038502 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-content" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038699 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038739 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.039297 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.042017 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.042088 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.047636 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067136 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259"} Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067204 4829 scope.go:117] "RemoveContainer" containerID="22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067263 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.086656 4829 scope.go:117] "RemoveContainer" containerID="b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.107008 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.111006 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.123760 4829 scope.go:117] "RemoveContainer" containerID="8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143247 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143346 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143507 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: 
I0217 15:58:09.247286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247441 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247767 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.300433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.363666 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.764777 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.074918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerStarted","Data":"fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f"} Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.287525 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" path="/var/lib/kubelet/pods/43b8d950-926a-4dc1-82a3-be0e61618dff/volumes" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.398594 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.398921 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.459657 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.679980 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.730441 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.079665 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerStarted","Data":"02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.082460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.084856 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.098006 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.097988063 podStartE2EDuration="2.097988063s" podCreationTimestamp="2026-02-17 15:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:11.096009518 +0000 UTC m=+203.513027506" watchObservedRunningTime="2026-02-17 15:58:11.097988063 +0000 UTC m=+203.515006041" Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.136148 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.091817 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" 
containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.091930 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.098459 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.099480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.382064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.444233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.106255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.111972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" 
event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.137432 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd6xf" podStartSLOduration=2.986294729 podStartE2EDuration="53.137410239s" podCreationTimestamp="2026-02-17 15:57:20 +0000 UTC" firstStartedPulling="2026-02-17 15:57:22.39991112 +0000 UTC m=+154.816929098" lastFinishedPulling="2026-02-17 15:58:12.55102663 +0000 UTC m=+204.968044608" observedRunningTime="2026-02-17 15:58:13.131842306 +0000 UTC m=+205.548860314" watchObservedRunningTime="2026-02-17 15:58:13.137410239 +0000 UTC m=+205.554428237" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.153170 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pc95c" podStartSLOduration=2.967500191 podStartE2EDuration="53.153147012s" podCreationTimestamp="2026-02-17 15:57:20 +0000 UTC" firstStartedPulling="2026-02-17 15:57:22.37667649 +0000 UTC m=+154.793694468" lastFinishedPulling="2026-02-17 15:58:12.562323311 +0000 UTC m=+204.979341289" observedRunningTime="2026-02-17 15:58:13.148068963 +0000 UTC m=+205.565086971" watchObservedRunningTime="2026-02-17 15:58:13.153147012 +0000 UTC m=+205.570165020" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.350127 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.401295 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.808949 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:13 crc 
kubenswrapper[4829]: I0217 15:58:13.865793 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:17 crc kubenswrapper[4829]: I0217 15:58:17.603461 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:17 crc kubenswrapper[4829]: I0217 15:58:17.604109 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fpmz" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" containerID="cri-o://629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" gracePeriod=2 Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.012962 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057446 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057675 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057781 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc 
kubenswrapper[4829]: I0217 15:58:18.058931 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities" (OuterVolumeSpecName: "utilities") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.068816 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8" (OuterVolumeSpecName: "kube-api-access-5zjb8") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "kube-api-access-5zjb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.149957 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" exitCode=0 Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75"} Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150169 4829 scope.go:117] "RemoveContainer" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 
15:58:18.150079 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.159137 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.159159 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.169233 4829 scope.go:117] "RemoveContainer" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.205296 4829 scope.go:117] "RemoveContainer" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.224035 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.225494 4829 scope.go:117] "RemoveContainer" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.226868 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": container with ID starting with 629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1 not found: ID does not exist" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.227197 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} err="failed to get container status \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": rpc error: code = NotFound desc = could not find container \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": container with ID starting with 629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1 not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.227320 4829 scope.go:117] "RemoveContainer" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.227931 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": container with ID starting with 98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5 not found: ID does not exist" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228009 
4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} err="failed to get container status \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": rpc error: code = NotFound desc = could not find container \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": container with ID starting with 98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5 not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228044 4829 scope.go:117] "RemoveContainer" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.228444 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": container with ID starting with f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd not found: ID does not exist" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228633 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd"} err="failed to get container status \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": rpc error: code = NotFound desc = could not find container \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": container with ID starting with f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.260737 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.474373 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.477967 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.293900 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" path="/var/lib/kubelet/pods/0dfe32e4-aee9-408a-9b01-4ab9f4da515f/volumes" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.796866 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.797327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.862820 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.007957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.008393 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.068842 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.244033 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.247805 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.808145 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.424981 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425075 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425140 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425964 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.426069 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" gracePeriod=600 Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.805634 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185430 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" exitCode=0 Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185690 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" containerID="cri-o://77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" gracePeriod=2 Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.521256 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631185 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.632084 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities" (OuterVolumeSpecName: "utilities") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.647345 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6" (OuterVolumeSpecName: "kube-api-access-429d6") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "kube-api-access-429d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.683232 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732314 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732374 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732390 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.194906 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" exitCode=0 Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195026 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195076 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"5acc356c5d2ec47c5d87b88d2204b71dfd80af3eab05b77d8870f888eb4da2ab"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195040 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195107 4829 scope.go:117] "RemoveContainer" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.198687 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.199109 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" containerID="cri-o://311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" gracePeriod=2 Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.231791 4829 scope.go:117] "RemoveContainer" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.258850 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.266199 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.284991 4829 scope.go:117] "RemoveContainer" 
containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.288261 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" path="/var/lib/kubelet/pods/8d559324-3a7f-41a3-9229-b2b96294faad/volumes" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.304516 4829 scope.go:117] "RemoveContainer" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.305136 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": container with ID starting with 77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e not found: ID does not exist" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305186 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} err="failed to get container status \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": rpc error: code = NotFound desc = could not find container \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": container with ID starting with 77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305221 4829 scope.go:117] "RemoveContainer" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.305829 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": container with ID starting with 63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210 not found: ID does not exist" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305873 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} err="failed to get container status \"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": rpc error: code = NotFound desc = could not find container \"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": container with ID starting with 63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210 not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305892 4829 scope.go:117] "RemoveContainer" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.306399 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": container with ID starting with d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef not found: ID does not exist" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.306649 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef"} err="failed to get container status \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": rpc error: code = NotFound desc = could not find container \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": container with ID 
starting with d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.571824 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646492 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646633 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646660 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.647781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities" (OuterVolumeSpecName: "utilities") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.652090 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg" (OuterVolumeSpecName: "kube-api-access-5bzhg") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "kube-api-access-5bzhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.720186 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748358 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748413 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748428 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225203 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" 
containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" exitCode=0 Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225346 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225387 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf"} Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225403 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225417 4829 scope.go:117] "RemoveContainer" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.257435 4829 scope.go:117] "RemoveContainer" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.288465 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.297345 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.297881 4829 scope.go:117] "RemoveContainer" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.317468 4829 scope.go:117] "RemoveContainer" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 
15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.318317 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": container with ID starting with 311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344 not found: ID does not exist" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.318407 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} err="failed to get container status \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": rpc error: code = NotFound desc = could not find container \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": container with ID starting with 311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344 not found: ID does not exist" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.318464 4829 scope.go:117] "RemoveContainer" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.319292 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": container with ID starting with 20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58 not found: ID does not exist" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.319351 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} err="failed to get container status 
\"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": rpc error: code = NotFound desc = could not find container \"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": container with ID starting with 20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58 not found: ID does not exist" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.319391 4829 scope.go:117] "RemoveContainer" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.320201 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": container with ID starting with b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045 not found: ID does not exist" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.320447 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045"} err="failed to get container status \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": rpc error: code = NotFound desc = could not find container \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": container with ID starting with b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045 not found: ID does not exist" Feb 17 15:58:26 crc kubenswrapper[4829]: I0217 15:58:26.292859 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" path="/var/lib/kubelet/pods/958bc260-664c-466f-afd3-9a7ac9c119bf/volumes" Feb 17 15:58:32 crc kubenswrapper[4829]: I0217 15:58:32.376316 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 
17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.872483 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.874649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.874947 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875074 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875192 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875312 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875439 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875563 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875722 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875846 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-content" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.875969 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876107 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876231 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876360 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876485 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876640 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876774 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876895 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877014 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877305 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.877444 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877567 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878204 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878400 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878322 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878648 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879002 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879046 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879065 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879079 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878694 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879102 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879115 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879135 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879147 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879169 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879182 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879204 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.879216 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879238 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879250 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879407 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879426 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879445 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879470 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879485 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879511 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878708 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878721 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878735 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.883317 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075358 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075393 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075475 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075508 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075533 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075556 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075583 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.084388 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.084825 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085211 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085741 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085976 4829 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.086011 4829 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.086318 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176642 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176679 4829 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176801 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176814 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176971 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177005 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177045 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177193 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.286946 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.381128 4829 generic.go:334] "Generic (PLEG): container finished" podID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerID="02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.381217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerDied","Data":"02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351"} Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.382236 4829 status_manager.go:851] "Failed to get status for pod" 
podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.384240 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.385432 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386463 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386492 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386505 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386514 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" exitCode=2 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386598 4829 scope.go:117] "RemoveContainer" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.688555 4829 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.396362 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:49 crc kubenswrapper[4829]: E0217 15:58:49.489749 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.647366 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.648368 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796551 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796598 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796794 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock" (OuterVolumeSpecName: "var-lock") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.797251 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.797291 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.804609 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.898364 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.243922 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.245239 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.245881 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.246423 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405291 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405382 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405480 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406122 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.409344 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410482 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" exitCode=0 Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410646 4829 scope.go:117] "RemoveContainer" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410682 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.412738 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413216 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413312 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerDied","Data":"fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f"} Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413347 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413541 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.421345 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.421888 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.440120 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.440733 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.441444 4829 scope.go:117] "RemoveContainer" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.474946 4829 scope.go:117] "RemoveContainer" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc 
kubenswrapper[4829]: I0217 15:58:50.497955 4829 scope.go:117] "RemoveContainer" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.506970 4829 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.507019 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.507036 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.518892 4829 scope.go:117] "RemoveContainer" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.540522 4829 scope.go:117] "RemoveContainer" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.564201 4829 scope.go:117] "RemoveContainer" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.564994 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": container with ID starting with 978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973 not found: ID does not exist" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.565118 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973"} err="failed to get container status \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": rpc error: code = NotFound desc = could not find container \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": container with ID starting with 978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973 not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.565203 4829 scope.go:117] "RemoveContainer" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.567318 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": container with ID starting with 6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab not found: ID does not exist" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567350 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab"} err="failed to get container status \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": rpc error: code = NotFound desc = could not find container \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": container with ID starting with 6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567374 4829 scope.go:117] "RemoveContainer" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 
15:58:50.567793 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": container with ID starting with 433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b not found: ID does not exist" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567812 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b"} err="failed to get container status \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": rpc error: code = NotFound desc = could not find container \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": container with ID starting with 433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567826 4829 scope.go:117] "RemoveContainer" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.568087 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": container with ID starting with b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e not found: ID does not exist" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568162 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e"} err="failed to get container status \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": rpc 
error: code = NotFound desc = could not find container \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": container with ID starting with b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568228 4829 scope.go:117] "RemoveContainer" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.568771 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": container with ID starting with 93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d not found: ID does not exist" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568854 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d"} err="failed to get container status \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": rpc error: code = NotFound desc = could not find container \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": container with ID starting with 93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568921 4829 scope.go:117] "RemoveContainer" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.569394 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": container with ID starting with 
8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503 not found: ID does not exist" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.569442 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503"} err="failed to get container status \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": rpc error: code = NotFound desc = could not find container \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": container with ID starting with 8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503 not found: ID does not exist" Feb 17 15:58:51 crc kubenswrapper[4829]: E0217 15:58:51.091412 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 17 15:58:52 crc kubenswrapper[4829]: I0217 15:58:52.290170 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 15:58:52 crc kubenswrapper[4829]: E0217 15:58:52.946539 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:52 crc kubenswrapper[4829]: I0217 15:58:52.947049 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:52 crc kubenswrapper[4829]: W0217 15:58:52.975709 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e WatchSource:0}: Error finding container 1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e: Status 404 returned error can't find the container with id 1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e Feb 17 15:58:52 crc kubenswrapper[4829]: E0217 15:58:52.980313 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.431114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487"} Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.431458 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e"} Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.432209 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:53 crc kubenswrapper[4829]: E0217 15:58:53.432294 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:54 crc kubenswrapper[4829]: E0217 15:58:54.212816 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:54 crc kubenswrapper[4829]: E0217 15:58:54.293351 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="6.4s" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.404750 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" containerID="cri-o://84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" gracePeriod=15 Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.861986 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.863101 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.863475 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013763 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013871 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013900 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod 
\"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013985 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014013 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014051 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014076 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc 
kubenswrapper[4829]: I0217 15:58:58.014102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014127 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014271 
4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.015356 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.015513 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.016289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.016618 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.021858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022166 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022678 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx" (OuterVolumeSpecName: "kube-api-access-vz7qx") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "kube-api-access-vz7qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022943 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023092 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023493 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023932 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.024844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.115963 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116048 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116074 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116095 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116113 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116132 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116150 4829 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116169 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116188 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116205 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116223 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116240 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116257 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116275 4829 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.283156 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.283776 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477013 4829 generic.go:334] "Generic (PLEG): container finished" podID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" exitCode=0 Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477065 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerDied","Data":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerDied","Data":"7baa23e27dea651b430693897781e89b000dbe0f94cbc9c61bef0909c8c3ed1a"} Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477909 4829 scope.go:117] "RemoveContainer" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.478959 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.479427 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.485154 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.485798 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.501937 4829 scope.go:117] "RemoveContainer" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: E0217 15:58:58.502298 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": container with ID starting with 84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b not found: ID does not exist" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.502337 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} err="failed to get container status \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": rpc error: code = NotFound desc = could not find container \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": container with ID starting with 84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b not found: ID does not exist" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497297 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497357 4829 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5" exitCode=1 Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497390 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5"} Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497961 4829 scope.go:117] "RemoveContainer" containerID="2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.498424 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.499058 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.499566 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: E0217 15:59:00.694500 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="7s" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.512044 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.512163 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"945ab05d78771985d7fa10f19ef17c18cbbf9d2a96fc24cfe6096156651e53da"} Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.514069 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.514667 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.515412 4829 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.278472 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.279986 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.280647 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.281235 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.305434 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.305493 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:03 crc kubenswrapper[4829]: E0217 15:59:03.306141 4829 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.306781 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:03 crc kubenswrapper[4829]: W0217 15:59:03.340553 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5 WatchSource:0}: Error finding container be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5: Status 404 returned error can't find the container with id be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5
Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.527279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5"}
Feb 17 15:59:04 crc kubenswrapper[4829]: E0217 15:59:04.213978 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.446793 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.454152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.454796 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.455434 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.455957 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.536539 4829 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bcb56bc01ac126b70d3ba476643d5384f1d58a222170d303030efc4d80185842" exitCode=0
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.536653 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bcb56bc01ac126b70d3ba476643d5384f1d58a222170d303030efc4d80185842"}
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537189 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537568 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537632 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.538219 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:04 crc kubenswrapper[4829]: E0217 15:59:04.538219 4829 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.538814 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.539275 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused"
Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.558192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f4c1a29b704d808d12087cc63e69a99ff7f44c7ecf17856837e6ce82b593deb"}
Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.559210 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc8d6c678e3b71a2f08913ea321b5b856403c5d2299a6a02f3f5f4d2a9de8700"}
Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.559325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"28beff336ae2932e57e19638e46f2c1305e41ac5c7252c25229b4295568ab0e2"}
Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.568989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"551796ff3d20fcedb09eb46ccc618e99f54e2af2d65e52d31493da2e84235bd1"}
Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569297 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569304 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"de8cc5433242d2e33aec78e46c3a7546c0edc36b50fa91c0775c9e4f8b6fde9e"}
Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569319 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.570688 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.307385 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.307447 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.314800 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:11 crc kubenswrapper[4829]: I0217 15:59:11.580696 4829 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:59:11 crc kubenswrapper[4829]: I0217 15:59:11.712000 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8dfacaac-c9f1-44a9-8bc9-62b7cf034443"
Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.602658 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.603129 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2"
Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.605831 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8dfacaac-c9f1-44a9-8bc9-62b7cf034443"
Feb 17 15:59:19 crc kubenswrapper[4829]: I0217 15:59:19.902721 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.013332 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.583858 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.609822 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.724886 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.856015 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.006481 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.129158 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.593411 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.792912 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.945139 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.088713 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.150917 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.289482 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.337384 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.366419 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.443016 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.733066 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.013649 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.194121 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.243225 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.404356 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.411434 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.424193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.432537 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.447945 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.642519 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.700552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.705252 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.725568 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.741197 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.843161 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.889613 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.953625 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.023409 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.100516 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.176375 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.213644 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.318977 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.430989 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.532140 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.532930 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.572525 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.786763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.906292 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.999485 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.018358 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.169806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.236978 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.247665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.332178 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.376232 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.566630 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.601451 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.646204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.764174 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.790462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.894856 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.902791 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.046829 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.167372 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.172420 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.190902 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.266987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.304195 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.327975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.390270 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.455984 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.566811 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.611432 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.656025 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.692595 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.764812 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.792186 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.875450 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.911889 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.922673 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.963373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.059719 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.125391 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.156950 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.194425 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.203856 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.241179 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.252622 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.273884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.343225 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.365163 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.492848 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.493730 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.504727 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.553762 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.695899 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.780341 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.809384 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.842665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.039497 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.067521 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.187828 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.245822 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.246520 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.259798 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.311250 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.375912 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.396477 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.423934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.448676 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.524304 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.659091 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.694475 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.731298 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.766064 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.904649 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.191570 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.255696 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.343715 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.365236 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.426069 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.456340 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.500947 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.510687 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.516836 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.585024 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.794806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.840863 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.855068 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.858775 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.892373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.919387 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.022981 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.028211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.148369 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.214701 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.223520 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.284242 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.301987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.355447 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.500204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.573099 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.619987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.651533 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.657078 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.750064 4829 reflector.go:368] Caches populated for
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.850975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.869107 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.899034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.929073 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.010838 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.020166 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.070847 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.189190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.413541 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.461608 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:32 
crc kubenswrapper[4829]: I0217 15:59:32.613348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.715622 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.827925 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.931090 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.942200 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.945820 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.955267 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.977602 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.078933 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.123617 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.160330 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.161319 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.223852 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.456670 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.459858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.485974 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.514543 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.515619 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.741487 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.871184 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.971589 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:59:33 crc 
kubenswrapper[4829]: I0217 15:59:33.973835 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.149164 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.197754 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.244545 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.259905 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.355134 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.403386 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.406449 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.413540 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.413672 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.419009 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.427906 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.428009 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.432834 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.432817625 podStartE2EDuration="23.432817625s" podCreationTimestamp="2026-02-17 15:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:59:34.431704203 +0000 UTC m=+286.848722211" watchObservedRunningTime="2026-02-17 15:59:34.432817625 +0000 UTC m=+286.849835603" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.572216 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.580981 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.643707 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.658858 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.749380 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.845972 4829 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.102654 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.185672 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.392442 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.574389 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.612233 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.688140 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.748424 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.750187 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.791106 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.852212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:59:36 crc 
kubenswrapper[4829]: I0217 15:59:36.109545 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.272117 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.286980 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" path="/var/lib/kubelet/pods/f1ea7808-ad5e-47ee-a19b-4ece436be60d/volumes" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.351217 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.434047 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.500924 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.540784 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.634078 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.666665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.711194 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 
15:59:36.742949 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.850056 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.941154 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.951066 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.017932 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.044672 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"] Feb 17 15:59:37 crc kubenswrapper[4829]: E0217 15:59:37.045094 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045167 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: E0217 15:59:37.045229 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045287 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045439 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045508 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045914 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.049654 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.050260 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.050406 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051197 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051652 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051989 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.052392 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.053116 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.053503 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.053721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.054512 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.054866 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.073744 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.074194 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.076717 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.087364 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"] Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.147382 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.233877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234391 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234598 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234957 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.235073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235345 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl55h\" (UniqueName: 
\"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.336748 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " 
pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337410 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338772 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338833 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338880 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338880 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338929 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.339001 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.339068 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.339102 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl55h\" (UniqueName: \"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338479 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.340175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.340669 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.344387 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.344464 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 
15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346365 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.347048 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.347718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.349389 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.371557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl55h\" (UniqueName: \"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.383002 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.648084 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.664482 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.629391 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" 
Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630100 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" 
Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630133 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b\\\" Netns:\\\"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 15:59:40 crc kubenswrapper[4829]: I0217 15:59:40.779165 4829 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: I0217 15:59:40.779922 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767207 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc kubenswrapper[4829]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767880 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc kubenswrapper[4829]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767916 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc 
kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.768013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6\\\" Netns:\\\"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: 
[openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 15:59:45 crc kubenswrapper[4829]: I0217 15:59:45.570448 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:59:45 crc kubenswrapper[4829]: I0217 15:59:45.571305 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" gracePeriod=5 Feb 17 15:59:48 crc kubenswrapper[4829]: I0217 15:59:48.062088 4829 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.840006 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" exitCode=0 Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.840125 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.841426 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.850101 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.850381 4829 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" exitCode=137 Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.852502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.852946 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.854656 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.146279 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.146419 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.249972 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250214 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250229 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250254 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250319 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250352 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250880 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250909 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250926 4829 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250943 4829 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.258796 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.352356 4829 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.863860 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.864101 4829 scope.go:117] "RemoveContainer" containerID="b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.864292 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:59:52 crc kubenswrapper[4829]: I0217 15:59:52.290758 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 15:59:54 crc kubenswrapper[4829]: I0217 15:59:54.936470 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:59:55 crc kubenswrapper[4829]: I0217 15:59:55.513381 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.104009 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.145131 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.278898 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.279542 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.306319 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.519481 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:59:58 crc kubenswrapper[4829]: I0217 15:59:58.999371 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563123 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563209 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563242 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: 
error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563331 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0\\\" 
Netns:\\\"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.206348 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:00 crc kubenswrapper[4829]: E0217 16:00:00.206629 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.206644 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc 
kubenswrapper[4829]: I0217 16:00:00.206790 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.207269 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.209759 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.210354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.225195 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.393827 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.394302 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.394346 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495413 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.497816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.502609 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.517000 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.530412 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.559764 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 16:00:01 crc kubenswrapper[4829]: I0217 16:00:01.308045 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 16:00:01 crc kubenswrapper[4829]: I0217 16:00:01.470348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.102408 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.102731 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" containerID="cri-o://335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" gracePeriod=30 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.111056 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.199837 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.200079 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" 
containerID="cri-o://659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" gracePeriod=30 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.320547 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.496777 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525367 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525396 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526200 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526231 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526644 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config" (OuterVolumeSpecName: "config") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.531077 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.532055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk" (OuterVolumeSpecName: "kube-api-access-5w9jk") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "kube-api-access-5w9jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.553237 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626392 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626426 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626436 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626444 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626453 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w9jk\" (UniqueName: 
\"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.727938 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.728078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.728757 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.729112 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.730103 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.730344 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config" (OuterVolumeSpecName: "config") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.733627 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8" (OuterVolumeSpecName: "kube-api-access-svwh8") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "kube-api-access-svwh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.733691 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831715 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831761 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831779 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831797 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.901163 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937316 4829 generic.go:334] "Generic (PLEG): container finished" podID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" exitCode=0 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937397 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerDied","Data":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937499 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerDied","Data":"6a23ac3a0952fee762d7b612b6d50abf950d5b8d2ac6689a55a814e3e26c2a02"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937538 4829 scope.go:117] "RemoveContainer" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939745 4829 generic.go:334] "Generic (PLEG): container finished" podID="16271aa7-2602-467c-b9aa-31c491952eb8" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" exitCode=0 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939789 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerDied","Data":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerDied","Data":"8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939856 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.963816 4829 scope.go:117] "RemoveContainer" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: E0217 16:00:02.964344 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": container with ID starting with 659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40 not found: ID does not exist" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.964411 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} err="failed to get container status \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": rpc error: code = NotFound desc = could not find container \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": container with ID starting with 659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40 not found: ID does not exist" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.964448 4829 scope.go:117] "RemoveContainer" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.994544 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.998635 4829 scope.go:117] "RemoveContainer" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: E0217 16:00:02.999549 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": container with ID starting with 335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026 not found: ID does not exist" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.999771 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} err="failed to get container status \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": rpc error: code = NotFound desc = could not find container \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": container with ID starting with 335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026 not found: ID does not exist" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.006049 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.012919 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.019102 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.460424 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 16:00:03 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error 
adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.460980 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 16:00:03 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.461020 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 16:00:03 crc 
kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.461147 4829 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager(5695ec4a-a69a-4e62-9ddd-c9cea43413a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager(5695ec4a-a69a-4e62-9ddd-c9cea43413a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e\\\" Netns:\\\"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod \\\"collect-profiles-29522400-sbp9p\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.565631 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.566252 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566295 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.566340 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566357 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566714 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566763 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" 
containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.569321 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.571622 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.572796 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.573936 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574520 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574568 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574839 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574995 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.578776 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586252 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:03 crc 
kubenswrapper[4829]: I0217 16:00:03.586457 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586652 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586834 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.587294 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.587794 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.588426 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.597660 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642838 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642895 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: 
\"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643000 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643023 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc 
kubenswrapper[4829]: I0217 16:00:03.744608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744769 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" 
(UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744913 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.747197 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.747262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.748100 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.749838 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.750073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.757430 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.762794 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.774857 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.779122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.903986 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.918760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.950254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.950769 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.287970 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" path="/var/lib/kubelet/pods/16271aa7-2602-467c-b9aa-31c491952eb8/volumes" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.289429 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" path="/var/lib/kubelet/pods/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e/volumes" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.539001 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.105522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.118190 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.280213 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:05 crc kubenswrapper[4829]: W0217 16:00:05.285125 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d0d4bd_3c46_47c4_bc3d_25f039cf2f80.slice/crio-598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836 WatchSource:0}: Error finding container 598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836: Status 404 returned error can't find the container with id 598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836 Feb 17 16:00:05 crc kubenswrapper[4829]: W0217 16:00:05.287332 4829 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a661d1_dfe2_47e8_bf1a_9b4563e546cf.slice/crio-1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749 WatchSource:0}: Error finding container 1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749: Status 404 returned error can't find the container with id 1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749 Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.289370 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.965968 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerStarted","Data":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.966045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerStarted","Data":"1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.966073 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967423 4829 generic.go:334] "Generic (PLEG): container finished" podID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerID="389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d" exitCode=0 Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerDied","Data":"389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d"}
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerStarted","Data":"c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695"}
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968550 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerStarted","Data":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"}
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerStarted","Data":"598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836"}
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.975265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"
Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.992157 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" podStartSLOduration=3.9921433950000003 podStartE2EDuration="3.992143395s" podCreationTimestamp="2026-02-17 16:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:05.989186913 +0000 UTC m=+318.406204891" watchObservedRunningTime="2026-02-17 16:00:05.992143395 +0000 UTC m=+318.409161373"
Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.034001 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"
Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.042486 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" podStartSLOduration=4.042457312 podStartE2EDuration="4.042457312s" podCreationTimestamp="2026-02-17 16:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:06.036096825 +0000 UTC m=+318.453114803" watchObservedRunningTime="2026-02-17 16:00:06.042457312 +0000 UTC m=+318.459475330"
Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.366651 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.123370 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.331523 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") "
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389542 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") "
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") "
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.390820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.397388 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz" (OuterVolumeSpecName: "kube-api-access-sjwfz") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). InnerVolumeSpecName "kube-api-access-sjwfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.398364 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491111 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491170 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491192 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.984996 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerDied","Data":"c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695"}
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.985074 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695"
Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.985301 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"
Feb 17 16:00:08 crc kubenswrapper[4829]: I0217 16:00:08.074059 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 16:00:08 crc kubenswrapper[4829]: I0217 16:00:08.245723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 16:00:09 crc kubenswrapper[4829]: I0217 16:00:09.375676 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.039702 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"]
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.039983 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager" containerID="cri-o://0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" gracePeriod=30
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.062506 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"]
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.062824 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager" containerID="cri-o://ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" gracePeriod=30
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.278354 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.409125 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.502716 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.508483 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636454 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636657 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636715 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636774 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636902 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636931 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") "
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637596 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637853 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config" (OuterVolumeSpecName: "config") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.638077 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.638266 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config" (OuterVolumeSpecName: "config") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng" (OuterVolumeSpecName: "kube-api-access-bw2ng") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "kube-api-access-bw2ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641850 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2" (OuterVolumeSpecName: "kube-api-access-qt5w2") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "kube-api-access-qt5w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641973 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.642841 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.738811 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739102 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739154 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739179 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739206 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739232 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739281 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739301 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739323 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.751494 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"]
Feb 17 16:00:10 crc kubenswrapper[4829]: W0217 16:00:10.755731 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ddca7e_d4a1_4a03_95d2_6c3b1c2ba6c4.slice/crio-9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286 WatchSource:0}: Error finding container 9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286: Status 404 returned error can't find the container with id 9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.006745 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" event={"ID":"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4","Type":"ContainerStarted","Data":"9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286"}
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009199 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" exitCode=0
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerDied","Data":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"}
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009323 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009347 4829 scope.go:117] "RemoveContainer" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009328 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerDied","Data":"598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836"}
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.012996 4829 generic.go:334] "Generic (PLEG): container finished" podID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" exitCode=0
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013071 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerDied","Data":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"}
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013471 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerDied","Data":"1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749"}
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013110 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.044704 4829 scope.go:117] "RemoveContainer" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"
Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.045398 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": container with ID starting with 0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818 not found: ID does not exist" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.045453 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"} err="failed to get container status \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": rpc error: code = NotFound desc = could not find container \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": container with ID starting with 0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818 not found: ID does not exist"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.045490 4829 scope.go:117] "RemoveContainer" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.065995 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.070356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.079216 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.081652 4829 scope.go:117] "RemoveContainer" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"
Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.082216 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": container with ID starting with ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509 not found: ID does not exist" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.082271 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"} err="failed to get container status \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": rpc error: code = NotFound desc = could not find container \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": container with ID starting with ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509 not found: ID does not exist"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.085361 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581322 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"]
Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581727 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581748 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581772 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581784 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581805 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581819 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582034 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582068 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582092 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582780 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.588740 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.588985 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.589068 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590555 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590856 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590915 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.600450 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.600474 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.602206 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.612650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615269 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615453 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615827 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.616381 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.616765 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.620774 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.625292 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"]
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752603 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752655 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752767 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752900 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752941 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752992 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753154 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753281 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.854893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855103 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855232 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855264 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855323 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.856435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.857139 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.857891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.858727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.865178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.867666 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.874765 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.883252 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.886179 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.905507 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.934293 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.060021 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" event={"ID":"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4","Type":"ContainerStarted","Data":"1f8f34da87ac3541d3268f757fb3317046bad80af6ec5c1cf136c6d5d053a8f6"} Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.061272 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.069933 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.130245 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podStartSLOduration=100.130204617 podStartE2EDuration="1m40.130204617s" podCreationTimestamp="2026-02-17 15:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:12.106080863 +0000 UTC m=+324.523098851" watchObservedRunningTime="2026-02-17 16:00:12.130204617 +0000 UTC m=+324.547222595" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.186564 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"] Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.289825 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" path="/var/lib/kubelet/pods/87a661d1-dfe2-47e8-bf1a-9b4563e546cf/volumes" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.291080 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" path="/var/lib/kubelet/pods/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80/volumes" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.468979 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.067778 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerStarted","Data":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.068119 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerStarted","Data":"9296fc8a05e64c8caca2c8a1392a0740bf17e8421ebdcec6c4d6a1bf074bfb8e"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069401 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069882 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" event={"ID":"ff2dc4ce-73aa-4af1-92bc-480766efec5f","Type":"ContainerStarted","Data":"38d3ac6eefa5fb175f4e1a9e6d36087b28207773546c2cb8c6b7e2ee19de20c8"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" event={"ID":"ff2dc4ce-73aa-4af1-92bc-480766efec5f","Type":"ContainerStarted","Data":"00d5067c34eb9a6b8d3c5bd1bf0a4b1a860ef0999178895bd60d6e2c48490c9f"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.070308 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.073003 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.077326 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.091702 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" podStartSLOduration=3.091684154 podStartE2EDuration="3.091684154s" podCreationTimestamp="2026-02-17 16:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:13.089146033 +0000 UTC m=+325.506164031" watchObservedRunningTime="2026-02-17 16:00:13.091684154 +0000 UTC m=+325.508702132" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.103514 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" podStartSLOduration=3.103494394 podStartE2EDuration="3.103494394s" podCreationTimestamp="2026-02-17 16:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:13.10192145 +0000 UTC m=+325.518939428" watchObservedRunningTime="2026-02-17 16:00:13.103494394 +0000 UTC m=+325.520512382" Feb 17 16:00:18 crc kubenswrapper[4829]: I0217 16:00:18.008781 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.118723 
4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.119517 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager" containerID="cri-o://68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" gracePeriod=30 Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.598861 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.713913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714189 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714857 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca" (OuterVolumeSpecName: "client-ca") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714875 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config" (OuterVolumeSpecName: "config") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715281 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715329 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715351 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.719203 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926" (OuterVolumeSpecName: "kube-api-access-9s926") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "kube-api-access-9s926". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.723677 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.816965 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.817014 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159714 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" exitCode=0 Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerDied","Data":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"} Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159790 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerDied","Data":"9296fc8a05e64c8caca2c8a1392a0740bf17e8421ebdcec6c4d6a1bf074bfb8e"} Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159813 4829 scope.go:117] "RemoveContainer" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159939 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.186105 4829 scope.go:117] "RemoveContainer" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" Feb 17 16:00:25 crc kubenswrapper[4829]: E0217 16:00:25.186660 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": container with ID starting with 68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c not found: ID does not exist" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.186710 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"} err="failed to get container status \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": rpc error: code = NotFound desc = could not find container \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": container with ID starting with 68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c not found: ID does not exist" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.187524 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.191483 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581056 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:00:25 crc kubenswrapper[4829]: E0217 16:00:25.581413 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581443 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581637 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.582177 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.584278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.584941 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.585879 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.586459 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.587582 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.588661 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.597606 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.598484 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.632841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " 
pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735065 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735728 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod 
\"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.736213 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.737026 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.737088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.739554 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.763162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.912085 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:26 crc kubenswrapper[4829]: I0217 16:00:26.289440 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" path="/var/lib/kubelet/pods/0bb5db83-ef1f-4e88-9d1c-d01334049378/volumes" Feb 17 16:00:26 crc kubenswrapper[4829]: I0217 16:00:26.301911 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerStarted","Data":"415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d"} Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172297 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" 
event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerStarted","Data":"beacd0d0ef6626d35fb52988e3bbd5f44ad53ca81aceba78081f2a53436b10ca"} Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172681 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.177621 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.189700 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" podStartSLOduration=3.189682865 podStartE2EDuration="3.189682865s" podCreationTimestamp="2026-02-17 16:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:27.186700822 +0000 UTC m=+339.603718820" watchObservedRunningTime="2026-02-17 16:00:27.189682865 +0000 UTC m=+339.606700843" Feb 17 16:00:52 crc kubenswrapper[4829]: I0217 16:00:52.424678 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:00:52 crc kubenswrapper[4829]: I0217 16:00:52.426037 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.074649 4829 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.076797 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager" containerID="cri-o://415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d" gracePeriod=30 Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.397368 4829 generic.go:334] "Generic (PLEG): container finished" podID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerID="415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d" exitCode=0 Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.397466 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerDied","Data":"415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d"} Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.493962 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603753 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603950 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603972 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.604658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.604837 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config" (OuterVolumeSpecName: "config") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.605167 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.608702 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.608878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn" (OuterVolumeSpecName: "kube-api-access-d95xn") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "kube-api-access-d95xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705351 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705402 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705410 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705420 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705429 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.183772 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"] Feb 17 16:01:03 crc kubenswrapper[4829]: E0217 16:01:03.184059 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.184076 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager" Feb 17 16:01:03 crc 
kubenswrapper[4829]: I0217 16:01:03.185606 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.186088 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.213626 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314252 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314499 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314642 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314675 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314709 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: 
\"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.340114 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerDied","Data":"beacd0d0ef6626d35fb52988e3bbd5f44ad53ca81aceba78081f2a53436b10ca"} Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405099 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405345 4829 scope.go:117] "RemoveContainer" containerID="415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415397 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415442 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415468 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415502 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415528 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415549 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.417096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.417331 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.418086 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.419361 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.419803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.432820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.435475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" 
(UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.476069 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.479672 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.501263 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.609338 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.610172 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.615689 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.615965 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616472 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616823 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.617067 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.623647 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.627220 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722169 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722370 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722401 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823574 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823753 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823788 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.824963 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.825347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.826926 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod 
\"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.830868 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.849516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.934424 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.944566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"] Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.157740 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:04 crc kubenswrapper[4829]: W0217 16:01:04.163198 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f31a99f_549f_4e80_b051_ce65bbe55c09.slice/crio-f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb WatchSource:0}: Error finding container f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb: Status 404 returned error can't find the container with id f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.285708 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" path="/var/lib/kubelet/pods/0d4e94d2-8fbf-47b1-acd8-b79b18470a25/volumes" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412084 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" event={"ID":"5eaf5db2-3348-4197-b96d-bf04627f6aae","Type":"ContainerStarted","Data":"717b178b185b59b96ad734a9d09feb405a12579b5e7b499ed809d2d545b77f09"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412137 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" event={"ID":"5eaf5db2-3348-4197-b96d-bf04627f6aae","Type":"ContainerStarted","Data":"a963822f6ecfd6ed23945d6354924eb6a8af70006e2f0e6e7b4488d03be0d21f"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412185 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.413487 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" event={"ID":"0f31a99f-549f-4e80-b051-ce65bbe55c09","Type":"ContainerStarted","Data":"010d62862df4f79ef60ebc758961f663abdc107f0cb4ac7d0d619c04a67c0d8e"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.413539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" event={"ID":"0f31a99f-549f-4e80-b051-ce65bbe55c09","Type":"ContainerStarted","Data":"f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.414369 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.422540 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.439691 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" podStartSLOduration=1.439667171 podStartE2EDuration="1.439667171s" podCreationTimestamp="2026-02-17 16:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:04.434527558 +0000 UTC m=+376.851545556" watchObservedRunningTime="2026-02-17 16:01:04.439667171 +0000 UTC m=+376.856685159" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.460688 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" podStartSLOduration=2.46067037 podStartE2EDuration="2.46067037s" podCreationTimestamp="2026-02-17 16:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:04.459354173 +0000 UTC m=+376.876372151" watchObservedRunningTime="2026-02-17 16:01:04.46067037 +0000 UTC m=+376.877688348" Feb 17 16:01:22 crc kubenswrapper[4829]: I0217 16:01:22.425078 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:01:22 crc kubenswrapper[4829]: I0217 16:01:22.427086 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:01:23 crc kubenswrapper[4829]: I0217 16:01:23.516268 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:23 crc kubenswrapper[4829]: I0217 16:01:23.602107 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.658755 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.659958 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4qsx" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" 
containerName="registry-server" containerID="cri-o://a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.666191 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.666795 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plxhn" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" containerID="cri-o://9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.672061 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.672240 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" containerID="cri-o://c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.701215 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.701548 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lg78k" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" containerID="cri-o://7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.712057 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:26 crc 
kubenswrapper[4829]: I0217 16:01:26.712381 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pzvbr" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" containerID="cri-o://cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.717929 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.718872 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.723498 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896734 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896794 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896838 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.998997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.999065 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.999146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.001494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.017562 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.023241 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.185414 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.197871 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303302 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303366 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303440 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.305121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities" (OuterVolumeSpecName: "utilities") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.307512 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt" (OuterVolumeSpecName: "kube-api-access-k6kjt") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "kube-api-access-k6kjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.350877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410226 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410268 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410287 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.421180 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.427488 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.443445 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511450 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.512717 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.516107 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.531527 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8" (OuterVolumeSpecName: "kube-api-access-m2ld8") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "kube-api-access-m2ld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603361 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603482 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"e87972fe228716c21ec7cecb1607e14e50dea5013a2a6768e543463984d2ebe1"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603429 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603498 4829 scope.go:117] "RemoveContainer" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.606249 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.606313 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608275 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608354 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608372 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"d19f6da1913041c5fd10e98efa71ae0ed6c2d8facfc11c2aa17840a88a15c77f"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610022 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610069 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610088 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612100 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612214 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612353 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613137 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613163 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613314 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.616328 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx" (OuterVolumeSpecName: "kube-api-access-slsbx") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "kube-api-access-slsbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.616735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities" (OuterVolumeSpecName: "utilities") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.619378 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities" (OuterVolumeSpecName: "utilities") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620873 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620948 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"9f6b76db525ea1716f4c1ce5158f77a01ac87265be5d53578be8975ef1a1c0b8"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620982 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.622836 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.624761 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5" (OuterVolumeSpecName: "kube-api-access-6jrd5") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "kube-api-access-6jrd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.629359 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.652785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.653348 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655077 4829 scope.go:117] "RemoveContainer" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.655450 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": container with ID starting with c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43 not found: ID does not exist" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655617 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} err="failed to get container status \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": rpc error: code = NotFound desc = could not find container \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": container with ID starting with c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655774 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.656153 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": container with ID starting with 
21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39 not found: ID does not exist" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.656194 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} err="failed to get container status \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": rpc error: code = NotFound desc = could not find container \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": container with ID starting with 21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.656221 4829 scope.go:117] "RemoveContainer" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.661505 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.688398 4829 scope.go:117] "RemoveContainer" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.688961 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.690329 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.705617 4829 scope.go:117] "RemoveContainer" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714603 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714663 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714676 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714687 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714705 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.726747 4829 scope.go:117] "RemoveContainer" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727057 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": container with ID starting with 7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650 not found: ID does not exist" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727105 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} err="failed to get container status \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": rpc error: code = NotFound desc = could not find container \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": container with ID starting with 7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727134 4829 scope.go:117] "RemoveContainer" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727408 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": container with ID starting with 75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b not found: ID does not exist" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727439 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b"} err="failed to get container status \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": rpc error: code = NotFound desc = could not find container \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": container with ID starting with 75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727457 4829 scope.go:117] "RemoveContainer" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727789 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": container with ID starting with 29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186 not found: ID does not exist" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727827 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186"} err="failed to get container status \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": rpc error: code = NotFound desc = could not find container \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": container with ID starting with 29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727857 4829 scope.go:117] "RemoveContainer" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.728189 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.744898 4829 scope.go:117] "RemoveContainer" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.787115 4829 scope.go:117] "RemoveContainer" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.803173 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.815534 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.831590 4829 scope.go:117] "RemoveContainer" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.831962 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": container with ID starting with cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582 not found: ID does not exist" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.831987 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} err="failed to get container status \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": rpc error: code = NotFound desc = could not find container \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": container with ID starting with cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832009 4829 scope.go:117] "RemoveContainer" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.832183 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": container with ID starting with a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e not found: ID does not exist" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832202 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} err="failed to get container status \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": rpc error: code = NotFound desc = could not find container \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": container with ID starting with a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832228 4829 scope.go:117] "RemoveContainer" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.832568 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": container with ID starting with 223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072 not found: ID does not exist" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832596 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072"} err="failed to get container status \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": rpc error: code = NotFound desc = could not find container \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": container with ID starting with 223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832609 4829 scope.go:117] "RemoveContainer" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.847429 4829 scope.go:117] "RemoveContainer" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.863405 4829 scope.go:117] "RemoveContainer" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882396 4829 scope.go:117] "RemoveContainer" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.882769 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": container with ID starting with 
a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb not found: ID does not exist" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882799 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} err="failed to get container status \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": rpc error: code = NotFound desc = could not find container \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": container with ID starting with a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882822 4829 scope.go:117] "RemoveContainer" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.883231 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": container with ID starting with 954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213 not found: ID does not exist" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883265 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} err="failed to get container status \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": rpc error: code = NotFound desc = could not find container \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": container with ID starting with 954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213 not found: ID does not 
exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883289 4829 scope.go:117] "RemoveContainer" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.883679 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": container with ID starting with 0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f not found: ID does not exist" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883706 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f"} err="failed to get container status \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": rpc error: code = NotFound desc = could not find container \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": container with ID starting with 0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.916948 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.926743 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 
16:01:27.926864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.928032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities" (OuterVolumeSpecName: "utilities") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.932762 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z" (OuterVolumeSpecName: "kube-api-access-qwm5z") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "kube-api-access-qwm5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.942991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.949752 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.954344 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.957939 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.000646 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028439 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028477 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028492 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.287768 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" path="/var/lib/kubelet/pods/980a7ff9-af1a-413c-8573-00243ed3ece1/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.288700 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" path="/var/lib/kubelet/pods/bedc9476-2a16-46d6-8764-8fd184304b5f/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.289815 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" path="/var/lib/kubelet/pods/d8370c4f-c05e-425c-a267-c270e36b5dfd/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.291245 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" path="/var/lib/kubelet/pods/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629353 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629571 4829 scope.go:117] "RemoveContainer" containerID="9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629693 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" event={"ID":"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9","Type":"ContainerStarted","Data":"3e83c4edbbeb93deede15ac765b6c7670a4281956550ec4df0e58589b435f965"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" event={"ID":"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9","Type":"ContainerStarted","Data":"2c79ddcdad8cf2554a1531b0732434356c8c56c3cd2c10b167b2192c19a52ed6"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631437 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.638094 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.651939 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.653162 4829 scope.go:117] "RemoveContainer" 
containerID="6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.657257 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.679566 4829 scope.go:117] "RemoveContainer" containerID="8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.680468 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" podStartSLOduration=2.680451236 podStartE2EDuration="2.680451236s" podCreationTimestamp="2026-02-17 16:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:28.677115449 +0000 UTC m=+401.094133447" watchObservedRunningTime="2026-02-17 16:01:28.680451236 +0000 UTC m=+401.097469234" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.874501 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"] Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875169 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875194 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875211 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875220 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-utilities" Feb 17 
16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875228 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875235 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875247 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875254 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875265 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875272 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875279 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875286 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875299 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875307 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-utilities" Feb 17 
16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875317 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875325 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875335 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875342 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875351 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875358 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875368 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875374 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875386 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875394 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 
crc kubenswrapper[4829]: E0217 16:01:28.875409 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875416 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875526 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875539 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875551 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875562 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875577 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875693 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875704 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875870 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" 
containerName="marketplace-operator"
Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.877788 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.879671 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.884581 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"]
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044678 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044868 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.079679 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h59n9"]
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.081179 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.087002 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.095411 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59n9"]
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146167 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146303 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.147169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.147642 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.163005 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.240032 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.247907 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.248118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.248296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350720 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350880 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350909 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.352066 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.352334 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.370675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.398417 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.693968 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"]
Feb 17 16:01:29 crc kubenswrapper[4829]: W0217 16:01:29.706303 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b134949_3436_4e61_9649_5704b6bcb7fd.slice/crio-28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437 WatchSource:0}: Error finding container 28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437: Status 404 returned error can't find the container with id 28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437
Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.787753 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59n9"]
Feb 17 16:01:29 crc kubenswrapper[4829]: W0217 16:01:29.796329 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1207e9e_0755_423d_9a3d_b83ded02c8c2.slice/crio-510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547 WatchSource:0}: Error finding container 510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547: Status 404 returned error can't find the container with id 510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.290629 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" path="/var/lib/kubelet/pods/2a5cfa35-799d-41b4-afa1-e5d056ceed8c/volumes"
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.651936 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b134949-3436-4e61-9649-5704b6bcb7fd" containerID="b75d79935bed5c3439e427ae88375c4f1bcc50e276aea79ec67d6126fd2e6c71" exitCode=0
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.651985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerDied","Data":"b75d79935bed5c3439e427ae88375c4f1bcc50e276aea79ec67d6126fd2e6c71"}
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.652023 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerStarted","Data":"28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437"}
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653663 4829 generic.go:334] "Generic (PLEG): container finished" podID="b1207e9e-0755-423d-9a3d-b83ded02c8c2" containerID="a0c5e4f1c9b6225d700d459d6678a80a5e30a4f6a8a64b96aaca4c353297cd9d" exitCode=0
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerDied","Data":"a0c5e4f1c9b6225d700d459d6678a80a5e30a4f6a8a64b96aaca4c353297cd9d"}
Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653761 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547"}
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.279774 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvk9j"]
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.281348 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.284876 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.294162 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvk9j"]
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.476726 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477358 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477416 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477438 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.478074 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.480832 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.498256 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578158 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578234 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578334 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578380 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.579026 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.596785 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.659728 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9"}
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.661987 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.662444 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b134949-3436-4e61-9649-5704b6bcb7fd" containerID="aa36779be39aa726f4da4e9126cfdc1b11c13a0995a40ba9c5cfac2963fa23c6" exitCode=0
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.662562 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerDied","Data":"aa36779be39aa726f4da4e9126cfdc1b11c13a0995a40ba9c5cfac2963fa23c6"}
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.679760 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680823 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.700033 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:31.803109 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.672540 4829 generic.go:334] "Generic (PLEG): container finished" podID="b1207e9e-0755-423d-9a3d-b83ded02c8c2" containerID="6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9" exitCode=0
Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.672775 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerDied","Data":"6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9"}
Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.705092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:01:32 crc kubenswrapper[4829]: W0217 16:01:32.709902 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92bf9e45_4314_4bab_8fda_e0fbf0e5e2b3.slice/crio-bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4 WatchSource:0}: Error finding container bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4: Status 404 returned error can't find the container with id bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4
Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.722045 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvk9j"]
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.679472 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerStarted","Data":"bcf7a7749f6b8b487dc8900e4efc7d463ece516d429a7fc61622c5ad830e92b3"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.682073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"9a2fd2f20644c0e7382ce5a04a739ef5064ff225acf34d2feda69f9852e192ac"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683119 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325" exitCode=0
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683152 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683174 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685020 4829 generic.go:334] "Generic (PLEG): container finished" podID="65b3d23b-0d04-496a-9dbb-fb4ed59d313b" containerID="670291e11b65c31fc36061561f528177efcf34e72dacd5cce0d0b9604697fee6" exitCode=0
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685055 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerDied","Data":"670291e11b65c31fc36061561f528177efcf34e72dacd5cce0d0b9604697fee6"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"49955cf127697addfddd5d1a4907c67cebb9bc250fbd09a8f01eda5cf86ea055"}
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.700806 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2sjn" podStartSLOduration=3.284217559 podStartE2EDuration="5.700790844s" podCreationTimestamp="2026-02-17 16:01:28 +0000 UTC" firstStartedPulling="2026-02-17 16:01:30.655396682 +0000 UTC m=+403.072414700" lastFinishedPulling="2026-02-17 16:01:33.071970007 +0000 UTC m=+405.488987985" observedRunningTime="2026-02-17 16:01:33.698796926 +0000 UTC m=+406.115814904" watchObservedRunningTime="2026-02-17 16:01:33.700790844 +0000 UTC m=+406.117808812"
Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.715981 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h59n9" podStartSLOduration=2.157278156 podStartE2EDuration="4.715960256s" podCreationTimestamp="2026-02-17 16:01:29 +0000 UTC" firstStartedPulling="2026-02-17 16:01:30.655367791 +0000 UTC m=+403.072385769" lastFinishedPulling="2026-02-17 16:01:33.214049891 +0000 UTC m=+405.631067869" observedRunningTime="2026-02-17 16:01:33.714528384 +0000 UTC m=+406.131546362" watchObservedRunningTime="2026-02-17 16:01:33.715960256 +0000 UTC m=+406.132978234"
Feb 17 16:01:34 crc kubenswrapper[4829]: I0217 16:01:34.690595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"}
Feb 17 16:01:34 crc kubenswrapper[4829]: I0217 16:01:34.693261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520"}
Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.698874 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21" exitCode=0
Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.698963 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"}
Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.701343 4829 generic.go:334] "Generic (PLEG): container finished" podID="65b3d23b-0d04-496a-9dbb-fb4ed59d313b" containerID="db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520" exitCode=0
Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.701377 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerDied","Data":"db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520"}
Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.710463 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"a9926dc89992ffbb3cc636334f0bc2a8a639030228c812b7325445578eceba50"}
Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.712906 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"}
Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.728072 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvk9j" podStartSLOduration=3.209178967 podStartE2EDuration="5.728049949s" podCreationTimestamp="2026-02-17 16:01:31 +0000 UTC" firstStartedPulling="2026-02-17 16:01:33.68759795 +0000 UTC m=+406.104615918" lastFinishedPulling="2026-02-17 16:01:36.206468902 +0000 UTC m=+408.623486900" observedRunningTime="2026-02-17 16:01:36.726608067 +0000 UTC m=+409.143626045" watchObservedRunningTime="2026-02-17 16:01:36.728049949 +0000 UTC m=+409.145067927"
Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.751469 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqfvj" podStartSLOduration=3.288748902 podStartE2EDuration="5.75145382s" podCreationTimestamp="2026-02-17 16:01:31 +0000 UTC" firstStartedPulling="2026-02-17 16:01:33.68451326 +0000 UTC m=+406.101531238" lastFinishedPulling="2026-02-17 16:01:36.147218188 +0000 UTC m=+408.564236156" observedRunningTime="2026-02-17 16:01:36.748125483 +0000 UTC m=+409.165143461" watchObservedRunningTime="2026-02-17 16:01:36.75145382 +0000 UTC m=+409.168471798"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.240258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.240614 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.292956 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.399625 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.399873 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.438328 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.770778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2sjn"
Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.779144 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h59n9"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.662210 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.663301 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.710487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.776989 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvk9j"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.803451 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.803629 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.847351 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:42 crc kubenswrapper[4829]: I0217 16:01:42.789110 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.659499 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" containerID="cri-o://37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" gracePeriod=30
Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.799628 4829 generic.go:334] "Generic (PLEG): container finished" podID="dc817ced-7abe-422d-af13-779118b5fe0f" containerID="37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" exitCode=0
Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.799639 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerDied","Data":"37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b"}
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.089623 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226475 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226542 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226763 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226831 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226889 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226940 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") "
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.227686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.227727 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.232788 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.234055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.236545 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.236545 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "installation-pull-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.238895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g" (OuterVolumeSpecName: "kube-api-access-nxg2g") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "kube-api-access-nxg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.242897 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328137 4829 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328185 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328208 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328228 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328245 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328261 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328282 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.806868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerDied","Data":"e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542"} Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.806910 4829 scope.go:117] "RemoveContainer" containerID="37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.807010 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.857645 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.863526 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:50 crc kubenswrapper[4829]: I0217 16:01:50.289772 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" path="/var/lib/kubelet/pods/dc817ced-7abe-422d-af13-779118b5fe0f/volumes" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424755 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424828 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.425443 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.425502 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c" gracePeriod=600 Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.827912 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c" exitCode=0 Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828003 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828263 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"} Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828290 4829 scope.go:117] "RemoveContainer" containerID="e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.743550 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:01:59 crc kubenswrapper[4829]: E0217 16:01:59.744851 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc 
kubenswrapper[4829]: I0217 16:01:59.744884 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.745190 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.746112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752307 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752682 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.754070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.754432 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.755122 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875664 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 
16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.978501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.985883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.995188 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.077906 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.515521 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.890287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" event={"ID":"6cefa21f-9e59-4010-ad20-b8e03cf353bf","Type":"ContainerStarted","Data":"0d803e081171f0fdf381a62bffe3d2d8eedba8c413c242abf0a94f07bb34bcc6"} Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.901925 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.902994 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.903846 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" event={"ID":"6cefa21f-9e59-4010-ad20-b8e03cf353bf","Type":"ContainerStarted","Data":"d1b1543149dadfea086e9cdabc894c26e75a4b9a196b98f736069a00ce8de741"} Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.906070 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-82jtk" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.906262 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.921498 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 
16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.947693 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" podStartSLOduration=2.144602704 podStartE2EDuration="3.947673739s" podCreationTimestamp="2026-02-17 16:01:59 +0000 UTC" firstStartedPulling="2026-02-17 16:02:00.524305925 +0000 UTC m=+432.941323943" lastFinishedPulling="2026-02-17 16:02:02.327377 +0000 UTC m=+434.744394978" observedRunningTime="2026-02-17 16:02:02.945527347 +0000 UTC m=+435.362545315" watchObservedRunningTime="2026-02-17 16:02:02.947673739 +0000 UTC m=+435.364691717" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.016439 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.117901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: E0217 16:02:03.118046 4829 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:02:03 crc kubenswrapper[4829]: E0217 16:02:03.118116 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates 
podName:728a0007-d901-4c84-aa7d-13a845147d80 nodeName:}" failed. No retries permitted until 2026-02-17 16:02:03.618096438 +0000 UTC m=+436.035114426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lrr94" (UID: "728a0007-d901-4c84-aa7d-13a845147d80") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.624959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.632730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.816728 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:04 crc kubenswrapper[4829]: I0217 16:02:04.081269 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 16:02:04 crc kubenswrapper[4829]: I0217 16:02:04.922962 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" event={"ID":"728a0007-d901-4c84-aa7d-13a845147d80","Type":"ContainerStarted","Data":"1cd6e487380d264ff565e75d4d8ef446ab7e75727b950fa9760858b1c7c2fea3"} Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.930441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" event={"ID":"728a0007-d901-4c84-aa7d-13a845147d80","Type":"ContainerStarted","Data":"7a70d732a62c33929e736cfd50090b9f5f9258478f9e6bc747a698b122b8489f"} Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.930904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.940211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.950302 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" podStartSLOduration=2.337651954 podStartE2EDuration="3.950287697s" podCreationTimestamp="2026-02-17 16:02:02 +0000 UTC" firstStartedPulling="2026-02-17 16:02:04.087548996 +0000 UTC m=+436.504566994" lastFinishedPulling="2026-02-17 16:02:05.700184759 +0000 UTC m=+438.117202737" observedRunningTime="2026-02-17 16:02:05.94592911 +0000 UTC 
m=+438.362947098" watchObservedRunningTime="2026-02-17 16:02:05.950287697 +0000 UTC m=+438.367305685" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.009828 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.010638 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013038 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013296 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013952 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-zqv84" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.025213 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.097948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098038 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098117 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199860 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199894 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: E0217 16:02:07.199998 4829 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 16:02:07 crc kubenswrapper[4829]: E0217 16:02:07.200050 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls podName:bb5ca468-da43-4076-b607-21a3a3799c55 nodeName:}" failed. No retries permitted until 2026-02-17 16:02:07.7000296 +0000 UTC m=+440.117047578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls") pod "prometheus-operator-db54df47d-nrldr" (UID: "bb5ca468-da43-4076-b607-21a3a3799c55") : secret "prometheus-operator-tls" not found Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.200900 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.222400 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.223145 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.706689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.721267 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.925821 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:08 crc kubenswrapper[4829]: I0217 16:02:08.379614 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:08 crc kubenswrapper[4829]: W0217 16:02:08.382754 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb5ca468_da43_4076_b607_21a3a3799c55.slice/crio-a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15 WatchSource:0}: Error finding container a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15: Status 404 returned error can't find the container with id a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15 Feb 17 16:02:08 crc kubenswrapper[4829]: I0217 16:02:08.951556 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.962678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" 
event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"7f3b14e607153a2972e1f1e90a136cf52bd5328f5de3675740e42c522750e0c1"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.963361 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"2117fd359e56760977e0aba46c4265804b49ec407e0f222862b3897c8c0232f0"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.990014 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" podStartSLOduration=3.35964792 podStartE2EDuration="4.989996788s" podCreationTimestamp="2026-02-17 16:02:06 +0000 UTC" firstStartedPulling="2026-02-17 16:02:08.385184296 +0000 UTC m=+440.802202274" lastFinishedPulling="2026-02-17 16:02:10.015533164 +0000 UTC m=+442.432551142" observedRunningTime="2026-02-17 16:02:10.983684255 +0000 UTC m=+443.400702283" watchObservedRunningTime="2026-02-17 16:02:10.989996788 +0000 UTC m=+443.407014776" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.347831 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.349021 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353484 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353712 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-97ncs" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353852 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.355008 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.375096 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.376343 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382637 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382973 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.383034 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q62sj" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.395789 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.446376 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-hww7w"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.447365 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.449522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.449663 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.455832 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gcggn" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.490952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491347 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc 
kubenswrapper[4829]: I0217 16:02:13.491453 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491567 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491683 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492025 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492058 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492074 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: 
\"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593281 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593347 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593415 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593434 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593478 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc 
kubenswrapper[4829]: I0217 16:02:13.593496 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593863 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593859 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" 
(UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593989 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.594691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.594689 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.595207 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.599964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.600842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.601694 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.602898 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.613993 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.614495 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.671901 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.694694 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695190 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 
16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.694942 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695989 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696776 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.699544 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.699545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: 
\"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.713354 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.762773 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.983917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"caef9da3426de438b5353f2604f619d63a795417f75c2a7ef8a37a75d97991be"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.079010 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:14 crc kubenswrapper[4829]: W0217 16:02:14.083153 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c36ac2a_a1c8_4e56_a6fd_077e321dbeb0.slice/crio-3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291 WatchSource:0}: Error finding container 3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291: Status 404 returned error can't find the container with id 3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291 Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.136943 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:14 crc kubenswrapper[4829]: W0217 16:02:14.142180 4829 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod556c56e9_a5b5_4038_a036_176255a8d491.slice/crio-7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd WatchSource:0}: Error finding container 7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd: Status 404 returned error can't find the container with id 7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.410263 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.412175 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.416385 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420221 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420402 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.421075 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.421228 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.422664 4829 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-dd55m" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.423050 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.424965 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.446179 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510588 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510731 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510937 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511001 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc 
kubenswrapper[4829]: I0217 16:02:14.511104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612318 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612362 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod 
\"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612400 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612428 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612448 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod 
\"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.613208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.613702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.614597 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617231 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617703 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.618767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.619024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.619966 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.620433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.621324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.628564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.732625 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.993949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"f3a029cb5f5ac465316b3fdcbc5bfeee9a734902b2ea8c58f62be1b62341cda5"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997630 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"c9f4c81ba3712eba7fbe0f174f70aec6a812e8bc5cf3462612206c40e4b84968"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd"} Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.208071 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.429921 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.439620 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.439965 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.443767 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444084 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444416 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-ag8cv1l60vbo7" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444561 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444835 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4ss68" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444759 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529878 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529944 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529966 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530062 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpp2\" 
(UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530253 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: W0217 16:02:15.589306 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed9f3be_0a53_4ab0_98d0_7f3644b24cab.slice/crio-775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62 WatchSource:0}: Error finding container 775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62: Status 404 returned error can't find the container with id 775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62 Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631567 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: 
\"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631654 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631746 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631788 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpp2\" (UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631822 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.633211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.636733 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " 
pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.639218 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.640484 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.641088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.644899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.645458 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p"
Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.653175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpp2\" (UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p"
Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.770807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p"
Feb 17 16:02:16 crc kubenswrapper[4829]: I0217 16:02:16.028820 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62"}
Feb 17 16:02:16 crc kubenswrapper[4829]: I0217 16:02:16.366901 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"]
Feb 17 16:02:16 crc kubenswrapper[4829]: W0217 16:02:16.376959 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf29c87_fafc_4650_9e33_9a12afaacff2.slice/crio-6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9 WatchSource:0}: Error finding container 6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9: Status 404 returned error can't find the container with id 6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.044330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.046836 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"91a90ae48ff47b7a38a1b1567709e1ceb8a7b36169b06f36b5c51683e653d9bf"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.049013 4829 generic.go:334] "Generic (PLEG): container finished" podID="d943ca51-64b2-4a03-a7cd-9fdc430742a5" containerID="0e5d88d101bc75b54345559672b0940d377e7c4ec415bbf75091b74dace05853" exitCode=0
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.049151 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerDied","Data":"0e5d88d101bc75b54345559672b0940d377e7c4ec415bbf75091b74dace05853"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058226 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"2874148a0a9604029daeda794ece867eb6a4d34044e6495008a35805db480a58"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058329 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"bdf680ddbe2042baa65364ea0790d22eb955450941e135a37ec4cb0478856685"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"2cbc7f05e03100e5f030e25c30b13de0fc17c86d99293c08f284f9d67461e53c"}
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.066449 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" podStartSLOduration=2.5544200889999997 podStartE2EDuration="4.066418995s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:14.443868706 +0000 UTC m=+446.860886684" lastFinishedPulling="2026-02-17 16:02:15.955867602 +0000 UTC m=+448.372885590" observedRunningTime="2026-02-17 16:02:17.065342484 +0000 UTC m=+449.482360462" watchObservedRunningTime="2026-02-17 16:02:17.066418995 +0000 UTC m=+449.483436973"
Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.106338 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" podStartSLOduration=2.293199559 podStartE2EDuration="4.106312776s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:14.085866439 +0000 UTC m=+446.502884427" lastFinishedPulling="2026-02-17 16:02:15.898979666 +0000 UTC m=+448.315997644" observedRunningTime="2026-02-17 16:02:17.083936355 +0000 UTC m=+449.500954353" watchObservedRunningTime="2026-02-17 16:02:17.106312776 +0000 UTC m=+449.523330744"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.066793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"c11b32d64a3f66100a0165920c49ce76a1a62f5950c031c2f7d9ea1cc4115fdc"}
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.067245 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"1bcc54d9357e5cc7264fcded6f2e7889686ca8dea70f6549d65e017a70c7c568"}
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.068091 4829 generic.go:334] "Generic (PLEG): container finished" podID="6ed9f3be-0a53-4ab0-98d0-7f3644b24cab" containerID="6b424b27de387b02b4b52768ced291fe81d653efeffb0de595f53abb04a48b44" exitCode=0
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.068141 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerDied","Data":"6b424b27de387b02b4b52768ced291fe81d653efeffb0de595f53abb04a48b44"}
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.104288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-hww7w" podStartSLOduration=3.734504248 podStartE2EDuration="5.104270084s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:13.80679773 +0000 UTC m=+446.223815708" lastFinishedPulling="2026-02-17 16:02:15.176563566 +0000 UTC m=+447.593581544" observedRunningTime="2026-02-17 16:02:18.093346556 +0000 UTC m=+450.510364534" watchObservedRunningTime="2026-02-17 16:02:18.104270084 +0000 UTC m=+450.521288072"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.177053 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.178161 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.236298 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273265 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273418 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273500 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375340 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375360 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.376633 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.376688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.377112 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.378989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.381184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.382133 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.391865 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.505167 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.556755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"]
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.557562 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560455 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560456 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-627cz"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560911 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fkhkec7ff3h1k"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.564462 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.570150 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"]
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.582082 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.684993 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.685051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686106 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686150 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686209 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787311 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787701 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787761 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787811 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787899 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.788772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.789221 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.789643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.793147 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.793475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.794381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.812201 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.886448 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.027521 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:02:19 crc kubenswrapper[4829]: W0217 16:02:19.048379 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b2f8413_6a54_4bef_a63e_f2b278f57a6d.slice/crio-bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd WatchSource:0}: Error finding container bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd: Status 404 returned error can't find the container with id bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.075018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerStarted","Data":"bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd"}
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.077651 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"6ca5a643784c8c5367f3e65a1fd29d033304a15413638a481bdc97d04027bd70"}
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.077671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"f598151b685b169b84e85f8d23310056f43371ae6cd306df0ed7cd0b72b8789f"}
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.139324 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"]
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.140284 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.146923 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.147083 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.149694 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"]
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.295279 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.304005 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"]
Feb 17 16:02:19 crc kubenswrapper[4829]: W0217 16:02:19.312815 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a57ae3_3984_406d_b3f4_a4c226234382.slice/crio-58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c WatchSource:0}: Error finding container 58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c: Status 404 returned error can't find the container with id 58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.396688 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.404696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"
Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.463627 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.084205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerStarted","Data":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"}
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.086858 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"35be71e2c35fded3288cd100d8af21765ea2dd1c1f28ab6ae6f19e3bd820524b"}
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.087914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" event={"ID":"b1a57ae3-3984-406d-b3f4-a4c226234382","Type":"ContainerStarted","Data":"58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c"}
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.351444 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"]
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.354200 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-847cdd58c-slpz9" podStartSLOduration=2.35417684 podStartE2EDuration="2.35417684s" podCreationTimestamp="2026-02-17 16:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:02:20.347255668 +0000 UTC m=+452.764273666" watchObservedRunningTime="2026-02-17 16:02:20.35417684 +0000 UTC m=+452.771194828"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.423169 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.425096 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429531 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429530 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429946 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429819 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430760 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430811 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430776 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430954 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.431087 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-f4i6b27l8t32"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.434567 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.434858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-4r2hf"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.438030 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.455637 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512228 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512247 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512294 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512319 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512539 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512620 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512645 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512668 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\")
pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512722 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512752 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512789 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.513013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614785 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614839 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614950 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614987 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615005 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615153 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615192 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615211 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615228 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615244 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.616506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.616795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.620942 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.621412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.622097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.623607 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.623895 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.624122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.624368 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.625307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.625407 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.628808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.629914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.634924 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.637803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.637851 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.640494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") 
pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.644881 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.761640 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:21 crc kubenswrapper[4829]: I0217 16:02:21.104867 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" event={"ID":"211288e8-fde3-46bb-99ee-46749e19112a","Type":"ContainerStarted","Data":"39e71a9ea17669c833f90c25cdc68a462caa52e2e6e5b3b06aab4d32f4b719f2"} Feb 17 16:02:21 crc kubenswrapper[4829]: I0217 16:02:21.784960 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.113920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"bcedfe4dd7d684dfd2615edaeb615a5d7fac07977499c79e3e153a541943d634"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114557 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"8fe1c29cc340dae45e8ecfa05205bde4738620c7c0536f3f5ce9c1e0d7173d6c"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114646 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114666 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"fbe0004479cd1f6f0c8bf879a286aa3242234a6eee3233f2f69b53385237ea61"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.117805 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"f3523d8fe3c805b586550c700a868eee49125e80932010b843383f496fe72419"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118121 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"aa34e3b50980dd1f90989d4ceee4bf62df376386a2feb13487028480533552e0"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118228 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"33ebcfc502784b4dd5372cf4a2f474ae88104cfb490bced4e208f755865122ec"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118250 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"6120f4986ef69dd47cd4bcf3a1ca1de2e1dfdd2b23cb22814581233e336a28b7"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120249 4829 generic.go:334] "Generic (PLEG): container finished" podID="a265a122-2cfe-440c-bf5a-881b4144381d" containerID="0782dc4434f3d1e0a5210a185283fae0b51d1016aa679d5509138a4fa3406164" exitCode=0 Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120290 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerDied","Data":"0782dc4434f3d1e0a5210a185283fae0b51d1016aa679d5509138a4fa3406164"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120314 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"76ee5173001ec2023f4e2a7fc75fe3110b0d771da3686de8a16f837c89445cd7"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.123078 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.141257 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" podStartSLOduration=2.155389573 podStartE2EDuration="7.141219747s" podCreationTimestamp="2026-02-17 16:02:15 +0000 UTC" firstStartedPulling="2026-02-17 16:02:16.381851296 +0000 UTC m=+448.798869274" lastFinishedPulling="2026-02-17 16:02:21.36768147 +0000 UTC m=+453.784699448" observedRunningTime="2026-02-17 16:02:22.140294591 +0000 UTC m=+454.557312579" watchObservedRunningTime="2026-02-17 16:02:22.141219747 +0000 UTC m=+454.558237725" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.128751 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" event={"ID":"211288e8-fde3-46bb-99ee-46749e19112a","Type":"ContainerStarted","Data":"30d3dbf407a8ec4ea029ebdfe4eb064a03fe839804ff15c0263be170a6102483"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.129233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136765 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136811 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"f0515a9fa8de8362c9dc0421cf5cef0144cef9ee713a8539a0d492332136e0cb"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136838 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"bf2b980f826c0aa4ea0b10dd4cad63ee3aa66053375dbe519125062c9bef0e38"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.139603 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" event={"ID":"b1a57ae3-3984-406d-b3f4-a4c226234382","Type":"ContainerStarted","Data":"2a6939912041c5d0fcee4ebd5a43630c4e8c02b1305f160b3f8ebb4b64b01f74"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.153325 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" podStartSLOduration=2.53813861 podStartE2EDuration="4.153298736s" podCreationTimestamp="2026-02-17 16:02:19 +0000 UTC" firstStartedPulling="2026-02-17 16:02:21.070527764 +0000 UTC m=+453.487545742" lastFinishedPulling="2026-02-17 16:02:22.68568789 +0000 UTC m=+455.102705868" observedRunningTime="2026-02-17 16:02:23.151170315 +0000 UTC m=+455.568188293" watchObservedRunningTime="2026-02-17 16:02:23.153298736 +0000 UTC m=+455.570316714" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.158252 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.182898 4829 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.406306095 podStartE2EDuration="9.182878157s" podCreationTimestamp="2026-02-17 16:02:14 +0000 UTC" firstStartedPulling="2026-02-17 16:02:15.592026865 +0000 UTC m=+448.009044843" lastFinishedPulling="2026-02-17 16:02:21.368598927 +0000 UTC m=+453.785616905" observedRunningTime="2026-02-17 16:02:23.178062727 +0000 UTC m=+455.595080705" watchObservedRunningTime="2026-02-17 16:02:23.182878157 +0000 UTC m=+455.599896135" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.228332 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" podStartSLOduration=1.866099448 podStartE2EDuration="5.228313459s" podCreationTimestamp="2026-02-17 16:02:18 +0000 UTC" firstStartedPulling="2026-02-17 16:02:19.317179456 +0000 UTC m=+451.734197434" lastFinishedPulling="2026-02-17 16:02:22.679393467 +0000 UTC m=+455.096411445" observedRunningTime="2026-02-17 16:02:23.209823192 +0000 UTC m=+455.626841170" watchObservedRunningTime="2026-02-17 16:02:23.228313459 +0000 UTC m=+455.645331437" Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.159946 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"8eaad2b8829c9f518ec03453a920606d566191cd710a47b003dfc5d0a48eca77"} Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.160449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"b4902392ae9e9faddbbaaf51c72a9490f48305b944f65a991fcc6e0497512878"} Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.160462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"3ec330fc97ce84a7db0f6e465a1250c1dec7d059b774b5ce7b3c091d402ec3cf"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"af44c0435d95f9f06200cc1ef71b94fac11efd1c984df9938c5dac85acdd2e2c"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"dd5d6fa06f86e1582cc0f51c47a81ddbe84b4e0b6b0d3852faad86cebff02590"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"1b007719f1f543d9b5475072dd81547d29c3ec96cb5f1a09119fe58fc39bd0c3"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.199178 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.75717947 podStartE2EDuration="7.199160459s" podCreationTimestamp="2026-02-17 16:02:20 +0000 UTC" firstStartedPulling="2026-02-17 16:02:22.12274496 +0000 UTC m=+454.539762948" lastFinishedPulling="2026-02-17 16:02:25.564725939 +0000 UTC m=+457.981743937" observedRunningTime="2026-02-17 16:02:27.196474327 +0000 UTC m=+459.613492315" watchObservedRunningTime="2026-02-17 16:02:27.199160459 +0000 UTC m=+459.616178437" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.505684 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.506150 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.514338 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:29 crc kubenswrapper[4829]: I0217 16:02:29.192723 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:29 crc kubenswrapper[4829]: I0217 16:02:29.321143 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:30 crc kubenswrapper[4829]: I0217 16:02:30.763622 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:38 crc kubenswrapper[4829]: I0217 16:02:38.886700 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:38 crc kubenswrapper[4829]: I0217 16:02:38.887402 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:54 crc kubenswrapper[4829]: I0217 16:02:54.384611 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9fgb2" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" containerID="cri-o://054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" gracePeriod=15 Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385513 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385817 4829 generic.go:334] "Generic (PLEG): container finished" podID="96919462-7665-4b8f-8a8a-7c865d29393f" 
containerID="054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" exitCode=2 Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerDied","Data":"054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587"} Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.489277 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.489385 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.536834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.536883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537007 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537039 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537113 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537198 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537257 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538236 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca" (OuterVolumeSpecName: "service-ca") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538404 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538423 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config" (OuterVolumeSpecName: "console-config") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.544985 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.547986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.552895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6" (OuterVolumeSpecName: "kube-api-access-99rq6") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "kube-api-access-99rq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.639864 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640935 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640983 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640995 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641009 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641022 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641035 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393794 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393873 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerDied","Data":"a4dd5884310a79cb7487b5f3cbe05eafb8d2a2c5440edad3ee0322f1cc8a15db"} Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393912 4829 scope.go:117] "RemoveContainer" containerID="054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.394038 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.438565 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.443765 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.314988 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" path="/var/lib/kubelet/pods/96919462-7665-4b8f-8a8a-7c865d29393f/volumes" Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.897299 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.911970 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:03:20 crc kubenswrapper[4829]: I0217 16:03:20.763852 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:20 crc kubenswrapper[4829]: I0217 16:03:20.806174 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:21 crc kubenswrapper[4829]: I0217 16:03:21.627705 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.225894 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:03:38 crc kubenswrapper[4829]: E0217 16:03:38.227100 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" Feb 17 16:03:38 crc kubenswrapper[4829]: 
I0217 16:03:38.227124 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.227303 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.228002 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.244655 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335801 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335820 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335840 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.336034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.437790 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc 
kubenswrapper[4829]: I0217 16:03:38.437897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438180 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc 
kubenswrapper[4829]: I0217 16:03:38.438296 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438699 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.439444 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.439707 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.440411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.445279 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.445769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.460025 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.552209 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.777389 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.735051 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerStarted","Data":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"} Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.735422 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerStarted","Data":"bfae83dcdb0a183b25666f792e4baf03784ae0581990e298c8186a70a2bee65f"} Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.773895 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-797db4bf78-znlsn" podStartSLOduration=1.773862941 podStartE2EDuration="1.773862941s" podCreationTimestamp="2026-02-17 16:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:03:39.762560206 +0000 UTC m=+532.179578274" watchObservedRunningTime="2026-02-17 16:03:39.773862941 +0000 UTC m=+532.190880959" Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.552795 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.555806 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.562941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.818562 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.902863 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:03:52 crc kubenswrapper[4829]: I0217 16:03:52.424454 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:03:52 crc kubenswrapper[4829]: I0217 16:03:52.424810 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:04:13 crc kubenswrapper[4829]: I0217 16:04:13.973959 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-847cdd58c-slpz9" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console" containerID="cri-o://f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" gracePeriod=15 Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.414778 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-847cdd58c-slpz9_7b2f8413-6a54-4bef-a63e-f2b278f57a6d/console/0.log" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.415218 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543278 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543404 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543441 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543517 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543610 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543658 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543691 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544676 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544704 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config" (OuterVolumeSpecName: "console-config") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544759 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca" (OuterVolumeSpecName: "service-ca") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544824 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.550249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj" (OuterVolumeSpecName: "kube-api-access-dnhjj") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "kube-api-access-dnhjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.552910 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.553796 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.644959 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.644992 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645001 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645010 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645018 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645026 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645034 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:04:15 crc 
kubenswrapper[4829]: I0217 16:04:15.045419 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-847cdd58c-slpz9_7b2f8413-6a54-4bef-a63e-f2b278f57a6d/console/0.log" Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045512 4829 generic.go:334] "Generic (PLEG): container finished" podID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" exitCode=2 Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045566 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerDied","Data":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"} Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerDied","Data":"bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd"} Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045716 4829 scope.go:117] "RemoveContainer" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045747 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.082632 4829 scope.go:117] "RemoveContainer" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" Feb 17 16:04:15 crc kubenswrapper[4829]: E0217 16:04:15.083233 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": container with ID starting with f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c not found: ID does not exist" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.083307 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"} err="failed to get container status \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": rpc error: code = NotFound desc = could not find container \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": container with ID starting with f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c not found: ID does not exist" Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.122239 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.136323 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:04:16 crc kubenswrapper[4829]: I0217 16:04:16.295027 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" path="/var/lib/kubelet/pods/7b2f8413-6a54-4bef-a63e-f2b278f57a6d/volumes" Feb 17 16:04:22 crc kubenswrapper[4829]: I0217 16:04:22.425315 4829 patch_prober.go:28] interesting 
pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:04:22 crc kubenswrapper[4829]: I0217 16:04:22.426007 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.425256 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.425975 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426041 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426794 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426892 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" gracePeriod=600 Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349056 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" exitCode=0 Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349133 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"} Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"} Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349962 4829 scope.go:117] "RemoveContainer" containerID="82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.882535 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"] Feb 17 16:06:13 crc kubenswrapper[4829]: E0217 16:06:13.883708 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 
16:06:13.883733 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.883967 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.885462 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.894818 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.896370 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"] Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943395 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943459 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943487 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044738 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044868 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044925 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.045763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.045797 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.072700 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.205288 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.496436 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"] Feb 17 16:06:15 crc kubenswrapper[4829]: I0217 16:06:15.024757 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerStarted","Data":"dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0"} Feb 17 16:06:15 crc kubenswrapper[4829]: I0217 16:06:15.025145 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerStarted","Data":"eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384"} Feb 17 16:06:16 crc kubenswrapper[4829]: I0217 16:06:16.031403 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0" exitCode=0 Feb 17 16:06:16 crc kubenswrapper[4829]: I0217 16:06:16.031441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0"} Feb 17 16:06:18 crc kubenswrapper[4829]: I0217 16:06:18.056983 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="3b44369213f31f496419e5b7daa056d8091242c791a342d2f9f9c30abd0445e8" exitCode=0 Feb 17 16:06:18 crc kubenswrapper[4829]: I0217 16:06:18.057074 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"3b44369213f31f496419e5b7daa056d8091242c791a342d2f9f9c30abd0445e8"} Feb 17 16:06:19 crc kubenswrapper[4829]: I0217 16:06:19.066185 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="829c2e7f2c989ba6ce504343e24bc2ccb57c7281d5dbce073b8332223ef12d4a" exitCode=0 Feb 17 16:06:19 crc kubenswrapper[4829]: I0217 16:06:19.066293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"829c2e7f2c989ba6ce504343e24bc2ccb57c7281d5dbce073b8332223ef12d4a"} Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.367623 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.438838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.438924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.439053 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.441451 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle" (OuterVolumeSpecName: "bundle") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.451001 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util" (OuterVolumeSpecName: "util") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.461900 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82" (OuterVolumeSpecName: "kube-api-access-vgv82") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "kube-api-access-vgv82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.540629 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.540960 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.541090 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083336 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384"} Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083748 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384" Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083431 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.829623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830416 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830432 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830460 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="util" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830468 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="util" Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830480 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="pull" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830489 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="pull" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.831137 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.831656 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.834752 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.835043 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.835662 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-sg987" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.847220 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.899661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" (UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.949643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.950370 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.951946 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nks7v" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.952475 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.962845 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.963649 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.973149 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.979172 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001324 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" (UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001393 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.045898 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" 
(UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102758 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102812 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102862 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106606 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106919 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106997 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.119968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.163725 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.163898 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.164850 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.166908 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.168020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-8gbgz" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.204274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.204337 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.231392 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 
16:06:32.267500 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.281386 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.307430 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.307491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.321442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.355211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod 
\"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.395692 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.396662 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.398532 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-msgzl" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.409651 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.410118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.410173 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516423 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516603 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.529203 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.545606 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.674161 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.725013 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.734409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.803188 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.852320 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: W0217 16:06:32.858332 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3431d3_b6f2_4658_b45c_c428b77e98df.slice/crio-93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3 WatchSource:0}: Error finding container 93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3: Status 404 returned error can't find the container with id 93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3 Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.942380 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: W0217 16:06:32.946147 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd120281_015e_45a4_b1ae_f868b2326499.slice/crio-d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033 WatchSource:0}: Error finding container d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033: Status 404 returned error can't find the container with id d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033 Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.154256 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" event={"ID":"54e12496-0dd9-43a5-accb-e17546b7b715","Type":"ContainerStarted","Data":"078b55e10f34b0421d9bb8c7a46bff6a31903748728fe58c08c6ebdda7a7aec9"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.155732 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" event={"ID":"dd120281-015e-45a4-b1ae-f868b2326499","Type":"ContainerStarted","Data":"d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.157309 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" event={"ID":"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8","Type":"ContainerStarted","Data":"9ce2b012b069c341f7a7901979a72c3602939b601fcb719b9088dbe5fc844951"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.158596 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" event={"ID":"9d3431d3-b6f2-4658-b45c-c428b77e98df","Type":"ContainerStarted","Data":"93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.160349 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" event={"ID":"edb49e50-f230-48c5-b2e5-fe59a3ae73fa","Type":"ContainerStarted","Data":"eac20a92dfcfdbc66e320fa2aa5349b93ab0d093380c1bbd953b52ddfbd9e887"} Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.461015 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.469091 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" 
podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" containerID="cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.469991 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" containerID="cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470508 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" containerID="cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470557 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" containerID="cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470643 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" containerID="cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470690 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" containerID="cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" gracePeriod=30 Feb 17 
16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470724 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.524457 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" containerID="cri-o://eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" gracePeriod=30 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.226147 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.228657 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229167 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229448 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229477 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 
16:06:41.229487 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229496 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229505 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" exitCode=143 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229514 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" exitCode=143 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229551 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229592 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229611 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229628 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229643 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232455 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232787 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232814 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" exitCode=2 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232832 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" 
event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.233276 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:06:41 crc kubenswrapper[4829]: E0217 16:06:41.233537 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.249634 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250048 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250565 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a" exitCode=0 Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250611 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" exitCode=0 Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250632 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" 
event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a"} Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250656 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906"} Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.256919 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.257864 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.258428 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 
16:06:44.258460 4829 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.372794 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512038 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512530 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512969 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592524 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqwqs"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592743 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592755 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592764 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592770 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592777 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592785 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592795 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592800 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592810 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" 
containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592817 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592824 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kubecfg-setup" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592830 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kubecfg-setup" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592839 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592844 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592853 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592858 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592868 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592874 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592884 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592889 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592899 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592905 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592915 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592920 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593011 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593023 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593029 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593041 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593049 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593056 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593063 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593071 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593078 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593085 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.593173 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593179 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593277 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.594923 4829 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602733 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602790 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602818 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602886 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602901 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602928 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602951 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602981 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603001 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod 
\"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603026 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603038 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603053 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603097 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603098 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603142 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603159 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603175 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603250 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603280 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603390 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603471 4829 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603484 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603493 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603501 4829 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603818 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log" (OuterVolumeSpecName: "node-log") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604872 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash" (OuterVolumeSpecName: "host-slash") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604894 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604919 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605208 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605283 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605317 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket" (OuterVolumeSpecName: "log-socket") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605367 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.617224 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.617912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8" (OuterVolumeSpecName: "kube-api-access-tbqk8") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "kube-api-access-tbqk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.628961 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.704894 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705191 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705269 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705343 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705366 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705422 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705437 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705457 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705477 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705510 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705530 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705551 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705602 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705613 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705622 4829 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705630 4829 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705638 4829 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705646 4829 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705655 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705663 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705671 4829 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705680 4829 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705687 4829 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705695 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc 
kubenswrapper[4829]: I0217 16:06:44.705703 4829 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705711 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705719 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705738 4829 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807113 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807175 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807195 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807221 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807235 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807343 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807296 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: 
\"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807470 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807493 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807629 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807667 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" 
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807837 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807856 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807901 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807860 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807943 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808069 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808498 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.815192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" 
(UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.831712 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.908379 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.270644 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" event={"ID":"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8","Type":"ContainerStarted","Data":"991c2b44469b5bcb14e456f6cf46e9e2d49468461be7ee6d7bb5561de2fbfd18"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.272906 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.274178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" event={"ID":"9d3431d3-b6f2-4658-b45c-c428b77e98df","Type":"ContainerStarted","Data":"3e3add12b9755ba83c31f6e709eac8c433f3a9d98ad67548f3a8233b50097f31"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.275468 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.276920 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:45 crc 
kubenswrapper[4829]: I0217 16:06:45.277138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" event={"ID":"edb49e50-f230-48c5-b2e5-fe59a3ae73fa","Type":"ContainerStarted","Data":"e4c4e834ef0b512da93ec7bfdec8d4cf293811857e0539c1e67503bf6fadb078"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.282203 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283062 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283848 4829 scope.go:117] "RemoveContainer" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.284066 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.299366 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" event={"ID":"54e12496-0dd9-43a5-accb-e17546b7b715","Type":"ContainerStarted","Data":"08a2b1c068659d94358546c700431d82b1043a4c29696ba3e5bf716c7d527abe"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.301320 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" event={"ID":"dd120281-015e-45a4-b1ae-f868b2326499","Type":"ContainerStarted","Data":"770d17b85d06ec85ba48c749bf75d8f4cae79d4912c88d4b379bfb2dc96cb041"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.301926 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309357 4829 generic.go:334] "Generic (PLEG): container finished" podID="cc41a532-4c37-401e-b0f0-7a9a0561c2e2" containerID="9b7d1b0a6d48da78994667522e51713fca0cf71d5805e72d8583c4e1896889eb" exitCode=0 Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309399 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerDied","Data":"9b7d1b0a6d48da78994667522e51713fca0cf71d5805e72d8583c4e1896889eb"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309424 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"8542198e8b90f9ae5798217628d88c91623dd3376cd976c3ed467635691ddfea"} Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.326342 4829 scope.go:117] "RemoveContainer" 
containerID="d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.340507 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" podStartSLOduration=2.771807485 podStartE2EDuration="14.340493128s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.761417389 +0000 UTC m=+705.178435367" lastFinishedPulling="2026-02-17 16:06:44.330103002 +0000 UTC m=+716.747121010" observedRunningTime="2026-02-17 16:06:45.299461405 +0000 UTC m=+717.716479393" watchObservedRunningTime="2026-02-17 16:06:45.340493128 +0000 UTC m=+717.757511106" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.356153 4829 scope.go:117] "RemoveContainer" containerID="f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.365163 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" podStartSLOduration=2.732809726 podStartE2EDuration="14.365148669s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.70227081 +0000 UTC m=+705.119288788" lastFinishedPulling="2026-02-17 16:06:44.334609733 +0000 UTC m=+716.751627731" observedRunningTime="2026-02-17 16:06:45.341415192 +0000 UTC m=+717.758433170" watchObservedRunningTime="2026-02-17 16:06:45.365148669 +0000 UTC m=+717.782166647" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.366360 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" podStartSLOduration=1.824645285 podStartE2EDuration="13.366354432s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.861686232 +0000 UTC m=+705.278704220" lastFinishedPulling="2026-02-17 
16:06:44.403395389 +0000 UTC m=+716.820413367" observedRunningTime="2026-02-17 16:06:45.364506952 +0000 UTC m=+717.781524930" watchObservedRunningTime="2026-02-17 16:06:45.366354432 +0000 UTC m=+717.783372410" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.385795 4829 scope.go:117] "RemoveContainer" containerID="6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.416001 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" podStartSLOduration=2.898269771 podStartE2EDuration="14.415983555s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.813629141 +0000 UTC m=+705.230647119" lastFinishedPulling="2026-02-17 16:06:44.331342905 +0000 UTC m=+716.748360903" observedRunningTime="2026-02-17 16:06:45.394465446 +0000 UTC m=+717.811483424" watchObservedRunningTime="2026-02-17 16:06:45.415983555 +0000 UTC m=+717.833001533" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.419242 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" podStartSLOduration=1.967421652 podStartE2EDuration="13.419234851s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.948509778 +0000 UTC m=+705.365527756" lastFinishedPulling="2026-02-17 16:06:44.400322947 +0000 UTC m=+716.817340955" observedRunningTime="2026-02-17 16:06:45.417763492 +0000 UTC m=+717.834781470" watchObservedRunningTime="2026-02-17 16:06:45.419234851 +0000 UTC m=+717.836252829" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.419772 4829 scope.go:117] "RemoveContainer" containerID="41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.442465 4829 scope.go:117] "RemoveContainer" 
containerID="0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.459981 4829 scope.go:117] "RemoveContainer" containerID="bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.471551 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.475550 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.486942 4829 scope.go:117] "RemoveContainer" containerID="023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.504202 4829 scope.go:117] "RemoveContainer" containerID="562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12" Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.292218 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad9f982-deda-446c-8801-dc47104eee62" path="/var/lib/kubelet/pods/fad9f982-deda-446c-8801-dc47104eee62/volumes" Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"2933b4b2a67f0926a4b76845ddfccb6bf3be42388e49f3149c39d974d79139b4"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"8d8c324e5545ecdb1cc09ba574f13e01a6aa0d5e4437af370035aa9c359e47ba"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"fede21686976fe43f1c05763c4613aca57a319ba2e3136c771d5046fa3406dc3"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"931f244b081ab0711d7116ba493110ab103b7a5985e891f5a2c5124005fc8b1c"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"f8683f4d44df22585fb5bff9a5c7f727b2e3d88a992da739341edfb5b0a5505c"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"666effb79e14960df18e0db22fb60aefafa035131d995e6009543129c45dd79a"} Feb 17 16:06:48 crc kubenswrapper[4829]: I0217 16:06:48.330232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"7f92fde773d72d3a44606cfba5a805e9a25ca7e2e6c4bb537adb93aa8860137e"} Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.389933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"fc5d8228d7c9c17201b7ac8435917189bdf200bcb184e9e854ce9202b731b25b"} Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.390826 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 
crc kubenswrapper[4829]: I0217 16:06:51.390944 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.391026 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.418636 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" podStartSLOduration=7.418617885 podStartE2EDuration="7.418617885s" podCreationTimestamp="2026-02-17 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:51.416623331 +0000 UTC m=+723.833641309" watchObservedRunningTime="2026-02-17 16:06:51.418617885 +0000 UTC m=+723.835635863" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.425294 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.427117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.425111 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.425469 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.577300 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.578003 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.586041 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.586760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587322 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hzdpq" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587451 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587535 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.600691 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pm9m5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.613602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.618178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.628910 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 
17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.630246 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.633224 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.634471 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-96c9z" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.716236 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.716320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.729894 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817328 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817459 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.852547 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.853098 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.900622 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.907710 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.918330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.934914 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.934969 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.935005 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.935050 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.936934 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.937982 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938023 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938043 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938085 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mf5jl" podUID="476f8c4d-b180-40c8-b5a7-120565b0789f" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.949707 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971768 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971826 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971852 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.279351 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.279682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401521 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401550 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401661 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402065 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402132 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402516 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482055 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482160 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482247 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493289 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493363 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493394 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493447 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509266 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509353 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509380 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509420 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mf5jl" podUID="476f8c4d-b180-40c8-b5a7-120565b0789f" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.278921 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.279840 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.280200 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329709 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329769 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329793 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329846 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.279285 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.280377 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320065 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320119 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320141 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.510324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.510405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"aa56853b3602137d47ca0ceae3dde453e9a6fb88133dbeed0156c70be560f295"} Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.279222 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.282636 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.761804 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:07:08 crc kubenswrapper[4829]: W0217 16:07:08.770286 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod476f8c4d_b180_40c8_b5a7_120565b0789f.slice/crio-aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6 WatchSource:0}: Error finding container aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6: Status 404 returned error can't find the container with id aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6 Feb 17 16:07:09 crc kubenswrapper[4829]: I0217 16:07:09.534282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mf5jl" 
event={"ID":"476f8c4d-b180-40c8-b5a7-120565b0789f","Type":"ContainerStarted","Data":"aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6"} Feb 17 16:07:12 crc kubenswrapper[4829]: I0217 16:07:12.576179 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mf5jl" event={"ID":"476f8c4d-b180-40c8-b5a7-120565b0789f","Type":"ContainerStarted","Data":"3d364fd6c9a540e6fd7527ed8aede93c02efce3014ec5d5ad823e6323548e75f"} Feb 17 16:07:12 crc kubenswrapper[4829]: I0217 16:07:12.611683 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mf5jl" podStartSLOduration=17.810412364 podStartE2EDuration="20.611646809s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 16:07:08.772169902 +0000 UTC m=+741.189187890" lastFinishedPulling="2026-02-17 16:07:11.573404317 +0000 UTC m=+743.990422335" observedRunningTime="2026-02-17 16:07:12.600031098 +0000 UTC m=+745.017049116" watchObservedRunningTime="2026-02-17 16:07:12.611646809 +0000 UTC m=+745.028664827" Feb 17 16:07:14 crc kubenswrapper[4829]: I0217 16:07:14.949293 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.280978 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.282313 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.933352 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 17 16:07:17 crc kubenswrapper[4829]: W0217 16:07:17.944397 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc500c7f_2cf7_447f_ae9e_f22211c1d4ad.slice/crio-1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb WatchSource:0}: Error finding container 1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb: Status 404 returned error can't find the container with id 1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb Feb 17 16:07:18 crc kubenswrapper[4829]: I0217 16:07:18.624664 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" event={"ID":"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad","Type":"ContainerStarted","Data":"1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb"} Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.279088 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.280267 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.898553 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:07:19 crc kubenswrapper[4829]: W0217 16:07:19.905194 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90365502_e574_4c31_b97b_ca69aac75648.slice/crio-a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3 WatchSource:0}: Error finding container a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3: Status 404 returned error can't find the container with id a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3 Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.646607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" event={"ID":"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad","Type":"ContainerStarted","Data":"f8cc6aa588d9e36a57087bba44fd8090b84e3ed8ed53846188cd7138fc3fa49f"} Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.646934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.648031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" event={"ID":"90365502-e574-4c31-b97b-ca69aac75648","Type":"ContainerStarted","Data":"a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3"} Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.673258 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podStartSLOduration=26.949873111 podStartE2EDuration="28.673234058s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 
16:07:17.948805755 +0000 UTC m=+750.365823773" lastFinishedPulling="2026-02-17 16:07:19.672166742 +0000 UTC m=+752.089184720" observedRunningTime="2026-02-17 16:07:20.667647978 +0000 UTC m=+753.084665956" watchObservedRunningTime="2026-02-17 16:07:20.673234058 +0000 UTC m=+753.090252046" Feb 17 16:07:21 crc kubenswrapper[4829]: I0217 16:07:21.658767 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" event={"ID":"90365502-e574-4c31-b97b-ca69aac75648","Type":"ContainerStarted","Data":"436d578c65f80a3ec7cb12d6b5f155d2c05a8f7c1bdfce7fb5151b5ec7f7617b"} Feb 17 16:07:21 crc kubenswrapper[4829]: I0217 16:07:21.685052 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podStartSLOduration=28.307310293 podStartE2EDuration="29.68502722s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 16:07:19.907833939 +0000 UTC m=+752.324851927" lastFinishedPulling="2026-02-17 16:07:21.285550876 +0000 UTC m=+753.702568854" observedRunningTime="2026-02-17 16:07:21.679166063 +0000 UTC m=+754.096184081" watchObservedRunningTime="2026-02-17 16:07:21.68502722 +0000 UTC m=+754.102045238" Feb 17 16:07:22 crc kubenswrapper[4829]: I0217 16:07:22.425236 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:07:22 crc kubenswrapper[4829]: I0217 16:07:22.425325 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 17 16:07:24 crc kubenswrapper[4829]: I0217 16:07:24.722999 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:07:27 crc kubenswrapper[4829]: I0217 16:07:27.953337 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.184355 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.192075 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.204490 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277297 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379543 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.380244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.380807 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.408682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.549031 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.031532 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:45 crc kubenswrapper[4829]: W0217 16:07:45.039775 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba WatchSource:0}: Error finding container c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba: Status 404 returned error can't find the container with id c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862426 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390" exitCode=0 Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862513 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" 
event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"} Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862555 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerStarted","Data":"c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba"} Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.864875 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:07:46 crc kubenswrapper[4829]: I0217 16:07:46.871498 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a" exitCode=0 Feb 17 16:07:46 crc kubenswrapper[4829]: I0217 16:07:46.871816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"} Feb 17 16:07:46 crc kubenswrapper[4829]: E0217 16:07:46.960175 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-conmon-b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:47 crc kubenswrapper[4829]: I0217 16:07:47.885718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerStarted","Data":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} Feb 17 16:07:47 crc kubenswrapper[4829]: I0217 16:07:47.905327 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pwbz6" podStartSLOduration=2.427836329 podStartE2EDuration="3.905302584s" podCreationTimestamp="2026-02-17 16:07:44 +0000 UTC" firstStartedPulling="2026-02-17 16:07:45.864653091 +0000 UTC m=+778.281671069" lastFinishedPulling="2026-02-17 16:07:47.342119306 +0000 UTC m=+779.759137324" observedRunningTime="2026-02-17 16:07:47.901417389 +0000 UTC m=+780.318435467" watchObservedRunningTime="2026-02-17 16:07:47.905302584 +0000 UTC m=+780.322320602" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.544616 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.546465 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.570026 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680375 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680421 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781547 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781712 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.782152 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.782220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.809058 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.869900 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.342737 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.913161 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828" exitCode=0 Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.913200 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"} Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.914326 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"747ea8fe9b8d0099815a9e67eb706998bb857d51b0eefecdf7d0c1e5e5268d24"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424411 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424920 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424993 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.426343 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.426490 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" gracePeriod=600 Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.928878 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934405 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" exitCode=0 Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934466 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934484 4829 scope.go:117] "RemoveContainer" containerID="eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" Feb 17 16:07:53 crc kubenswrapper[4829]: I0217 16:07:53.945599 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c" exitCode=0 Feb 17 16:07:53 crc kubenswrapper[4829]: I0217 16:07:53.945676 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.550258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.550809 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.580761 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.581905 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.583458 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.605138 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.630511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641738 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod 
\"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.742912 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.742988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743049 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743538 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743629 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.763880 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.785260 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.787010 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.799522 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844269 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.903532 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.945821 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.945946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.946040 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.946412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: 
I0217 16:07:54.946852 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.965309 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.980835 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"} Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.997029 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhlg9" podStartSLOduration=2.57332609 podStartE2EDuration="4.997011285s" podCreationTimestamp="2026-02-17 16:07:50 +0000 UTC" firstStartedPulling="2026-02-17 16:07:51.915223836 +0000 UTC m=+784.332241814" lastFinishedPulling="2026-02-17 16:07:54.338909031 +0000 UTC m=+786.755927009" observedRunningTime="2026-02-17 16:07:54.995962824 +0000 UTC m=+787.412980812" watchObservedRunningTime="2026-02-17 16:07:54.997011285 +0000 UTC m=+787.414029263" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.031437 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.112121 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.366389 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.567842 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986131 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="500e93f756bbd9dce2c1f230bbf359410a2ab2cb5aef71a9d300ea9b7abaf7a0" exitCode=0 Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986215 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"500e93f756bbd9dce2c1f230bbf359410a2ab2cb5aef71a9d300ea9b7abaf7a0"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerStarted","Data":"dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.989263 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="9c281425d585c4c09d0ce6e1170686f431088e4723cc45cf5b532ef15c09aa65" exitCode=0 Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 
16:07:55.990168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"9c281425d585c4c09d0ce6e1170686f431088e4723cc45cf5b532ef15c09aa65"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.990230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerStarted","Data":"8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7"} Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.004728 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="504f76f252aec139780ab0b0ab9e059fdf322750f3db1ce2bbd16fe4ade1509d" exitCode=0 Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.004799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"504f76f252aec139780ab0b0ab9e059fdf322750f3db1ce2bbd16fe4ade1509d"} Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.007039 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="4f311cee486863896f0d0b561244b9e78487341a09d2e005b828973516f9eccd" exitCode=0 Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.007075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"4f311cee486863896f0d0b561244b9e78487341a09d2e005b828973516f9eccd"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.028522 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"0da3d4ed97185dc0b4579d3d6a08b9bef01d516df9feba317d0a4cec41ef831f"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.028479 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="0da3d4ed97185dc0b4579d3d6a08b9bef01d516df9feba317d0a4cec41ef831f" exitCode=0 Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.038095 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="483ea08a6d40128fb85cce6a45b7d0089e6572f8293d7ba9fd96f371ecf39af4" exitCode=0 Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.038169 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"483ea08a6d40128fb85cce6a45b7d0089e6572f8293d7ba9fd96f371ecf39af4"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.539164 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.539498 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pwbz6" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server" containerID="cri-o://60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" gracePeriod=2 Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.464416 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.466098 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571460 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571593 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571653 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571692 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: 
\"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571730 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.572665 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle" (OuterVolumeSpecName: "bundle") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.573528 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle" (OuterVolumeSpecName: "bundle") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.579757 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x" (OuterVolumeSpecName: "kube-api-access-2ll8x") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "kube-api-access-2ll8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.580886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx" (OuterVolumeSpecName: "kube-api-access-jm9dx") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "kube-api-access-jm9dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.584533 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util" (OuterVolumeSpecName: "util") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.593042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util" (OuterVolumeSpecName: "util") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.671859 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672865 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672885 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672894 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672907 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672915 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672923 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.773499 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: 
\"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.774798 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.774996 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.776487 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities" (OuterVolumeSpecName: "utilities") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.782562 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6" (OuterVolumeSpecName: "kube-api-access-mcdj6") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). InnerVolumeSpecName "kube-api-access-mcdj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.811824 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.870144 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.870265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877553 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877600 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877615 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056599 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056634 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056642 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058486 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" exitCode=0 Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058590 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058614 4829 scope.go:117] "RemoveContainer" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058738 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063687 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063793 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.082404 4829 scope.go:117] "RemoveContainer" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.092204 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.096717 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.115137 4829 scope.go:117] "RemoveContainer" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.139967 4829 scope.go:117] "RemoveContainer" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.140505 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": container with ID starting with 60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4 not found: ID does not exist" 
containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.140559 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} err="failed to get container status \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": rpc error: code = NotFound desc = could not find container \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": container with ID starting with 60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4 not found: ID does not exist" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.140608 4829 scope.go:117] "RemoveContainer" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a" Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.141002 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": container with ID starting with b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a not found: ID does not exist" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141026 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"} err="failed to get container status \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": rpc error: code = NotFound desc = could not find container \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": container with ID starting with b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a not found: ID does not exist" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141045 4829 scope.go:117] 
"RemoveContainer" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390" Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.141278 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": container with ID starting with aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390 not found: ID does not exist" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141301 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"} err="failed to get container status \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": rpc error: code = NotFound desc = could not find container \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": container with ID starting with aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390 not found: ID does not exist" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.920977 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhlg9" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" probeResult="failure" output=< Feb 17 16:08:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:08:01 crc kubenswrapper[4829]: > Feb 17 16:08:02 crc kubenswrapper[4829]: I0217 16:08:02.296012 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" path="/var/lib/kubelet/pods/c5962bde-d309-4dbe-b4ce-750af54dec5c/volumes" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144318 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 
16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144840 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="pull" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144851 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="pull" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144862 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-utilities" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144868 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-utilities" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144876 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-content" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144882 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-content" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144892 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="util" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144897 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="util" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144908 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144913 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144923 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="pull" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144928 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="pull" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144938 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="util" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144943 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="util" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144951 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144957 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server" Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144976 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145086 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145102 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145112 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.146082 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.163682 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: 
\"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403666 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.404353 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.404808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.448491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " 
pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.461068 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.773753 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.126977 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843" exitCode=0 Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.127127 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843"} Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.127409 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"71392bb15fe30737dcc91e4557eb2e9ef23b12f6bed7911efc5cbd153b7360e4"} Feb 17 16:08:10 crc kubenswrapper[4829]: I0217 16:08:10.134340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab"} Feb 17 16:08:10 crc kubenswrapper[4829]: I0217 16:08:10.946750 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.026817 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.142759 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab" exitCode=0 Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.142839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab"} Feb 17 16:08:12 crc kubenswrapper[4829]: I0217 16:08:12.154219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5"} Feb 17 16:08:12 crc kubenswrapper[4829]: I0217 16:08:12.181995 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsjr9" podStartSLOduration=1.698732711 podStartE2EDuration="4.18197081s" podCreationTimestamp="2026-02-17 16:08:08 +0000 UTC" firstStartedPulling="2026-02-17 16:08:09.128665864 +0000 UTC m=+801.545683852" lastFinishedPulling="2026-02-17 16:08:11.611903963 +0000 UTC m=+804.028921951" observedRunningTime="2026-02-17 16:08:12.174792907 +0000 UTC m=+804.591810885" watchObservedRunningTime="2026-02-17 16:08:12.18197081 +0000 UTC m=+804.598988798" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.077755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"] Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.078953 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081280 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081699 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081747 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-246v8" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082552 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082776 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082915 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.098534 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"] Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169530 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc 
kubenswrapper[4829]: I0217 16:08:13.169611 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169647 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169745 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271373 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271539 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: 
\"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.272169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.278008 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.278374 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.291359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.291862 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.397844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.527526 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"] Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.528538 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531023 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531186 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-ndsvz" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.536950 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"] Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.676279 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod 
\"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.777376 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod \"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.794861 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod \"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" Feb 17 16:08:13 crc kubenswrapper[4829]: W0217 16:08:13.845950 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd845044e_d849_405d_a6ef_c2d76a5abba6.slice/crio-c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8 WatchSource:0}: Error finding container c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8: Status 404 returned error can't find the container with id c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8 Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.845993 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"] Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.850722 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.046299 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"] Feb 17 16:08:14 crc kubenswrapper[4829]: W0217 16:08:14.056737 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54232488_a26b_4bdf_8b89_381241b92b54.slice/crio-2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5 WatchSource:0}: Error finding container 2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5: Status 404 returned error can't find the container with id 2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5 Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.167105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8"} Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.168282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" event={"ID":"54232488-a26b-4bdf-8b89-381241b92b54","Type":"ContainerStarted","Data":"2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5"} Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.334825 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.335080 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhlg9" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" 
containerID="cri-o://57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" gracePeriod=2 Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.717894 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894374 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.897345 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities" (OuterVolumeSpecName: "utilities") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.917718 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk" (OuterVolumeSpecName: "kube-api-access-ms6rk") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "kube-api-access-ms6rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.998283 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.998314 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.028420 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.100446 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209319 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" exitCode=0 Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209389 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209379 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"} Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"747ea8fe9b8d0099815a9e67eb706998bb857d51b0eefecdf7d0c1e5e5268d24"} Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209483 4829 scope.go:117] "RemoveContainer" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.228182 4829 scope.go:117] "RemoveContainer" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.245078 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 
16:08:15.249224 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.269470 4829 scope.go:117] "RemoveContainer" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310234 4829 scope.go:117] "RemoveContainer" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.310794 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": container with ID starting with 57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18 not found: ID does not exist" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310855 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"} err="failed to get container status \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": rpc error: code = NotFound desc = could not find container \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": container with ID starting with 57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18 not found: ID does not exist" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310889 4829 scope.go:117] "RemoveContainer" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c" Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.311272 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": container with ID 
starting with 02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c not found: ID does not exist" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.311337 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} err="failed to get container status \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": rpc error: code = NotFound desc = could not find container \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": container with ID starting with 02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c not found: ID does not exist" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.311374 4829 scope.go:117] "RemoveContainer" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828" Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.313299 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": container with ID starting with d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828 not found: ID does not exist" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828" Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.313324 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"} err="failed to get container status \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": rpc error: code = NotFound desc = could not find container \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": container with ID starting with d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828 not found: 
ID does not exist" Feb 17 16:08:16 crc kubenswrapper[4829]: I0217 16:08:16.291153 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" path="/var/lib/kubelet/pods/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc/volumes" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.461450 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.461729 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.504909 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:19 crc kubenswrapper[4829]: I0217 16:08:19.309151 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.268651 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"ba72c41efe419b3422abc7bde3c04790e2e59a48d3430534b20b45fca82ff6b9"} Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.272482 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" event={"ID":"54232488-a26b-4bdf-8b89-381241b92b54","Type":"ContainerStarted","Data":"15b040fb3e7899376ade6063137f6935d3e43b40adbf5e55b1eed53dae4b925a"} Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.301903 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" podStartSLOduration=1.838106467 podStartE2EDuration="9.301878285s" 
podCreationTimestamp="2026-02-17 16:08:13 +0000 UTC" firstStartedPulling="2026-02-17 16:08:14.059259629 +0000 UTC m=+806.476277607" lastFinishedPulling="2026-02-17 16:08:21.523031437 +0000 UTC m=+813.940049425" observedRunningTime="2026-02-17 16:08:22.301054712 +0000 UTC m=+814.718072700" watchObservedRunningTime="2026-02-17 16:08:22.301878285 +0000 UTC m=+814.718896293" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.138031 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.138525 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tsjr9" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" containerID="cri-o://52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" gracePeriod=2 Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.283192 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" exitCode=0 Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.283264 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5"} Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.679953 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.862744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863132 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863182 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863911 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities" (OuterVolumeSpecName: "utilities") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.869565 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc" (OuterVolumeSpecName: "kube-api-access-25xzc") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "kube-api-access-25xzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.926452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964817 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964852 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964895 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.298708 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"71392bb15fe30737dcc91e4557eb2e9ef23b12f6bed7911efc5cbd153b7360e4"} Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.299680 4829 scope.go:117] "RemoveContainer" containerID="52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.298792 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.328682 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.332078 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.338989 4829 scope.go:117] "RemoveContainer" containerID="eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.361151 4829 scope.go:117] "RemoveContainer" containerID="87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843" Feb 17 16:08:26 crc kubenswrapper[4829]: I0217 16:08:26.291851 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" path="/var/lib/kubelet/pods/ca2bc313-c759-4b68-8a79-91cfb9059e60/volumes" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.359255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"39f4699e9f021d5f434136341eedaca0c0c1c1d7408ab84504a01535453bfcaa"} Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.359878 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.363352 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.402155 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" podStartSLOduration=1.906124768 podStartE2EDuration="17.402120706s" podCreationTimestamp="2026-02-17 16:08:13 +0000 UTC" firstStartedPulling="2026-02-17 16:08:13.849281409 +0000 UTC m=+806.266299387" lastFinishedPulling="2026-02-17 16:08:29.345277347 +0000 UTC m=+821.762295325" observedRunningTime="2026-02-17 16:08:30.392493774 +0000 UTC m=+822.809511822" watchObservedRunningTime="2026-02-17 16:08:30.402120706 +0000 UTC m=+822.819138724" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000001 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000619 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000638 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000659 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000669 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000692 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000703 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000736 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000746 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000760 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000768 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000780 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000788 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000930 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000953 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.001544 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.003598 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.003648 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.006836 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.132096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.132634 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.234266 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.234467 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: 
\"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.238241 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.238313 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78a1f1f59404e4c8f45632a04b4073b58fcf919b0e2b57c1f6ffde01f2db77fb/globalmount\"" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.260123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.277114 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.319540 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.757794 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:08:35 crc kubenswrapper[4829]: I0217 16:08:35.393690 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f947362f-df3e-462c-af01-d31c8e524633","Type":"ContainerStarted","Data":"a9919eaaf5ba6065bd7b230fbc8591757b05b256a18ae97cba58d18f27c588df"} Feb 17 16:08:38 crc kubenswrapper[4829]: I0217 16:08:38.415290 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f947362f-df3e-462c-af01-d31c8e524633","Type":"ContainerStarted","Data":"194a09ccdc4146f67ea826888bb30a1fba2326145655f42c49a864fa6b00f429"} Feb 17 16:08:38 crc kubenswrapper[4829]: I0217 16:08:38.432680 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.26252234 podStartE2EDuration="7.432665125s" podCreationTimestamp="2026-02-17 16:08:31 +0000 UTC" firstStartedPulling="2026-02-17 16:08:34.769110156 +0000 UTC m=+827.186128134" lastFinishedPulling="2026-02-17 16:08:37.939252911 +0000 UTC m=+830.356270919" observedRunningTime="2026-02-17 16:08:38.431146924 +0000 UTC m=+830.848164902" watchObservedRunningTime="2026-02-17 16:08:38.432665125 +0000 UTC m=+830.849683093" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.951097 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"] Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.954157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958160 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958258 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958670 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-bjxjt" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.959259 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.966695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.073871 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.073969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: 
\"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074028 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.101779 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.102676 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.104934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.105365 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.106383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.114272 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174957 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: 
\"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175043 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175585 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175685 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.176252 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: 
\"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.176529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.183512 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.191355 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.192422 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199732 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199801 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.209084 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.210311 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.260103 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.265462 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273312 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273464 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-rccgh" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273616 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273681 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273740 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.274189 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285129 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285232 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285292 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.289735 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.291037 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.293105 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.301097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.301282 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.316359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.339789 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341228 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341360 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341329 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387212 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387445 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387529 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387553 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387693 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: 
\"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.397415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.401631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.403119 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.422457 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.427251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.428502 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488674 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488980 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489018 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489033 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489178 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") 
pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489256 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489280 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489307 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.490850 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.491061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.491243 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.492211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.507618 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.507774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.510125 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.512382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.547107 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591417 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591565 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591601 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591619 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591635 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.593223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.597498 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.598051 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.598301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.599408 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.601485 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.604946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.625027 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.659365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.663333 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: W0217 16:08:44.665844 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76340faf_b2e5_461e_9172_a03eee715830.slice/crio-51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4 WatchSource:0}: Error finding container 51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4: Status 404 returned error can't find the container with id 51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4 Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.680626 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.746126 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"] Feb 17 16:08:44 crc kubenswrapper[4829]: W0217 16:08:44.769168 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e78e45a_c46f_4cfd_a487_56fad3cb0649.slice/crio-67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc WatchSource:0}: Error finding container 67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc: Status 404 returned error can't find the container with id 67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.066766 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"] Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.070715 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90856a62_8a7f_479c_af7e_a95b8292618a.slice/crio-03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362 WatchSource:0}: Error finding container 03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362: Status 404 returned error can't find the container with id 03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362 Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.079942 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.081004 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.083043 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.083728 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.088623 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.139240 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"] Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.143520 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a2308f_5d3c_4dac_b105_3d42a6b7bdd1.slice/crio-45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47 WatchSource:0}: Error finding container 45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47: Status 404 returned error can't find the container with id 45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47 Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.150865 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.151944 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.153783 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.153882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.163810 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.185103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.201882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.201948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202043 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod 
\"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rmt\" (UniqueName: 
\"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.209549 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52de54a3_9f80_412c_a925_25541914e2b0.slice/crio-1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a WatchSource:0}: Error finding container 1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a: Status 404 returned error can't find the container with id 1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.227637 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.228473 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.230219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.230406 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.241705 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.303936 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rmt\" (UniqueName: \"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.303984 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304038 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304061 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304092 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304183 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304214 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304229 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305115 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305166 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305207 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305260 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305297 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305690 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.307916 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.307950 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e861a5096f5f0d1287f9a88513df974a6a9c92d5d1b4a4bae97166a7b3febbf7/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.310292 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.310380 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3a40b83791c7a77d3eb558f51ade9de37416943ff6cc471855c64f0b52b50f1/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.312735 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.313812 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.315767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.322395 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rmt\" (UniqueName: \"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.338161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.341995 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406148 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406311 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406340 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406357 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406391 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.408136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.408977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.410305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.410436 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.410860 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.411024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.411224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.412140 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.412256 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.413989 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.414027 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f022bf64a59c1be903ef93f415580ba9af908757cb0725ae917d6880abb7ea9/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.415023 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.415772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.418507 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.418546 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b43741fcaf0f6728a264b5d8e8846f094e17347790ad69ae2ff64917e7ad50d4/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.428911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.431883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.466028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47"}
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.466834 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" event={"ID":"90856a62-8a7f-479c-af7e-a95b8292618a","Type":"ContainerStarted","Data":"03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362"}
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.468268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" event={"ID":"76340faf-b2e5-461e-9172-a03eee715830","Type":"ContainerStarted","Data":"51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4"}
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.469203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a"}
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.469508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.470151 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" event={"ID":"3e78e45a-c46f-4cfd-a487-56fad3cb0649","Type":"ContainerStarted","Data":"67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc"}
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.477568 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.558213 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.767334 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.971760 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.082163 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.206683 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 17 16:08:46 crc kubenswrapper[4829]: W0217 16:08:46.218956 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7dd4bfd_add5_4b6b_a938_5e8ae8433d10.slice/crio-460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd WatchSource:0}: Error finding container 460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd: Status 404 returned error can't find the container with id 460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd
Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.480153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a7c5b31c-f45c-4a04-afc1-251ef93e471a","Type":"ContainerStarted","Data":"b7c05feab7d9fbcd578a2ece1545d8ce879d457d9bb03dda0bbbf9a7e4d6dc25"}
Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.481109 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10","Type":"ContainerStarted","Data":"460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd"}
Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.481864 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7bf847ac-1d33-4bad-8882-4661d8f33da8","Type":"ContainerStarted","Data":"e87ee7b3b50d4607829cd4eae44e1099d6244218b95876dda8d89c2567638c5d"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.506241 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" event={"ID":"76340faf-b2e5-461e-9172-a03eee715830","Type":"ContainerStarted","Data":"811697a9b1ff759b6e30e692f2c95294982457094cc342cd770f03e257b912a8"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.506602 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.509902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a7c5b31c-f45c-4a04-afc1-251ef93e471a","Type":"ContainerStarted","Data":"270b53e2496f7577b11d1051da265f3a02f93a80c0f7d4954d147a6445e2144c"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.510139 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.513093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"98598ab6d4962b1587cf43b25e0655a4e20c8080617afb70d1b3b0b7ce2b163b"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.517283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" event={"ID":"3e78e45a-c46f-4cfd-a487-56fad3cb0649","Type":"ContainerStarted","Data":"71442ed0ebf28802dd3e6974191297917b0d9883339122decf98f1c28113a84e"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.517398 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.519718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"aca1dd42c199facbfa267d6584d4ded803b90be84ebfb21ab914da6b8fedea34"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.521925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" event={"ID":"90856a62-8a7f-479c-af7e-a95b8292618a","Type":"ContainerStarted","Data":"a06ca0982f53531643f81645359aef99245f632e7d1218b8f8dbcfd662282709"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.522111 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.524316 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10","Type":"ContainerStarted","Data":"abe53713cc41341bb80e28237643197db45973014c7bbe9bff1453219a49142f"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.524454 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.526362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7bf847ac-1d33-4bad-8882-4661d8f33da8","Type":"ContainerStarted","Data":"2f24fe25640262f273ce96f1c91bff695933d0e3a5cbea23562b81090aac3db3"}
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.526639 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.537874 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" podStartSLOduration=1.942560452 podStartE2EDuration="5.53785339s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:44.671273767 +0000 UTC m=+837.088291745" lastFinishedPulling="2026-02-17 16:08:48.266566705 +0000 UTC m=+840.683584683" observedRunningTime="2026-02-17 16:08:49.534025036 +0000 UTC m=+841.951043054" watchObservedRunningTime="2026-02-17 16:08:49.53785339 +0000 UTC m=+841.954871408"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.579740 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.292940693 podStartE2EDuration="5.579720386s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.980543692 +0000 UTC m=+838.397561660" lastFinishedPulling="2026-02-17 16:08:48.267323365 +0000 UTC m=+840.684341353" observedRunningTime="2026-02-17 16:08:49.572198511 +0000 UTC m=+841.989216499" watchObservedRunningTime="2026-02-17 16:08:49.579720386 +0000 UTC m=+841.996738374"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.601531 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" podStartSLOduration=3.113261803 podStartE2EDuration="6.601508976s" podCreationTimestamp="2026-02-17 16:08:43 +0000 UTC" firstStartedPulling="2026-02-17 16:08:44.772094962 +0000 UTC m=+837.189112940" lastFinishedPulling="2026-02-17 16:08:48.260342105 +0000 UTC m=+840.677360113" observedRunningTime="2026-02-17 16:08:49.599996465 +0000 UTC m=+842.017014473" watchObservedRunningTime="2026-02-17 16:08:49.601508976 +0000 UTC m=+842.018526964"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.636514 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.584196933 podStartE2EDuration="5.636475855s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:46.221226091 +0000 UTC m=+838.638244059" lastFinishedPulling="2026-02-17 16:08:48.273505003 +0000 UTC m=+840.690522981" observedRunningTime="2026-02-17 16:08:49.629244579 +0000 UTC m=+842.046262567" watchObservedRunningTime="2026-02-17 16:08:49.636475855 +0000 UTC m=+842.053493843"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.654963 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.471711832 podStartE2EDuration="5.654941706s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:46.0847771 +0000 UTC m=+838.501795078" lastFinishedPulling="2026-02-17 16:08:48.268006954 +0000 UTC m=+840.685024952" observedRunningTime="2026-02-17 16:08:49.6524979 +0000 UTC m=+842.069515888" watchObservedRunningTime="2026-02-17 16:08:49.654941706 +0000 UTC m=+842.071959684"
Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.681481 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" podStartSLOduration=2.5538390939999998 podStartE2EDuration="5.681454315s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.074040213 +0000 UTC m=+837.491058201" lastFinishedPulling="2026-02-17 16:08:48.201655444 +0000 UTC m=+840.618673422" observedRunningTime="2026-02-17 16:08:49.674733223 +0000 UTC m=+842.091751221" watchObservedRunningTime="2026-02-17 16:08:49.681454315 +0000 UTC m=+842.098472293"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.542972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"115bbf832da28ba6694e9713df6612e5c8a5717206df7fc0da8f43d7adb59986"}
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.543563 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.543662 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.544966 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"5c418dd2d77a5af464833ba222d0f29363a17df4c659fece282e9f95c09fa60b"}
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.545202 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.553498 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.560038 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.565110 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.578211 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" podStartSLOduration=2.205503596 podStartE2EDuration="7.578182657s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.145754598 +0000 UTC m=+837.562772576" lastFinishedPulling="2026-02-17 16:08:50.518433659 +0000 UTC m=+842.935451637" observedRunningTime="2026-02-17 16:08:51.567612211 +0000 UTC m=+843.984630199" watchObservedRunningTime="2026-02-17 16:08:51.578182657 +0000 UTC m=+843.995200675"
Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.597457 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" podStartSLOduration=2.313823273 podStartE2EDuration="7.597436489s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.211686376 +0000 UTC m=+837.628704354" lastFinishedPulling="2026-02-17 16:08:50.495299592 +0000 UTC m=+842.912317570" observedRunningTime="2026-02-17 16:08:51.592356771 +0000 UTC m=+844.009374789" watchObservedRunningTime="2026-02-17 16:08:51.597436489 +0000 UTC m=+844.014454467"
Feb 17 16:08:52 crc kubenswrapper[4829]: I0217 16:08:52.554895 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:52 crc kubenswrapper[4829]: I0217 16:08:52.567912 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.293854 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"
Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.429888
4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.556693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.420755 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.421301 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.567857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.778294 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:09:15 crc kubenswrapper[4829]: I0217 16:09:15.418390 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:09:15 crc kubenswrapper[4829]: I0217 16:09:15.420053 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 
16:09:25 crc kubenswrapper[4829]: I0217 16:09:25.417023 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:09:25 crc kubenswrapper[4829]: I0217 16:09:25.417783 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:35 crc kubenswrapper[4829]: I0217 16:09:35.415255 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:09:35 crc kubenswrapper[4829]: I0217 16:09:35.415628 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:45 crc kubenswrapper[4829]: I0217 16:09:45.418862 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.751853 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.757074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.766337 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.887597 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.888052 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.888221 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990718 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.991262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.991299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.032805 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.079363 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.583379 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020875 4829 generic.go:334] "Generic (PLEG): container finished" podID="11288751-f708-4745-96fa-625be709d265" containerID="bc6744f09138f5aa87c11faadd70077d0a62ba785aae5ae1e92283729ce3768c" exitCode=0 Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020941 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerDied","Data":"bc6744f09138f5aa87c11faadd70077d0a62ba785aae5ae1e92283729ce3768c"} Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerStarted","Data":"402cead6fc56aaa5adc0f7ecbd14bf2fe1010dfdb7732a80d93f22e151d3d5d5"} Feb 17 16:09:52 crc kubenswrapper[4829]: I0217 16:09:52.424942 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:52 crc kubenswrapper[4829]: I0217 16:09:52.425234 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:53 crc kubenswrapper[4829]: I0217 16:09:53.059040 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="11288751-f708-4745-96fa-625be709d265" containerID="f0f1933635205a797290236ef1808afed82485d095a4bc966936f5165644cd68" exitCode=0 Feb 17 16:09:53 crc kubenswrapper[4829]: I0217 16:09:53.059096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerDied","Data":"f0f1933635205a797290236ef1808afed82485d095a4bc966936f5165644cd68"} Feb 17 16:09:54 crc kubenswrapper[4829]: I0217 16:09:54.070600 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerStarted","Data":"0a5d2598b77ae8e825ac5d8cf1c1b53ecf7814c96e5f7aaf259f43223f8d6a78"} Feb 17 16:09:54 crc kubenswrapper[4829]: I0217 16:09:54.091762 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgnph" podStartSLOduration=2.6425731580000003 podStartE2EDuration="8.091739638s" podCreationTimestamp="2026-02-17 16:09:46 +0000 UTC" firstStartedPulling="2026-02-17 16:09:48.023861089 +0000 UTC m=+900.440879107" lastFinishedPulling="2026-02-17 16:09:53.473027609 +0000 UTC m=+905.890045587" observedRunningTime="2026-02-17 16:09:54.084867112 +0000 UTC m=+906.501885160" watchObservedRunningTime="2026-02-17 16:09:54.091739638 +0000 UTC m=+906.508757656" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.079619 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.080011 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.146777 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.922864 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.924119 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.929693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930061 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-72v7n" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930126 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930180 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930413 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.933918 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.936703 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.082240 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:03 crc kubenswrapper[4829]: E0217 16:10:03.083015 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-pr2kc metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-mrvfp" podUID="ee08f929-2d75-418a-ba47-8f64355f622d" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107367 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: 
\"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107447 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107496 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107593 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107659 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.141925 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.151102 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.208939 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.208998 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209022 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 
crc kubenswrapper[4829]: I0217 16:10:03.209042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209086 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: 
\"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210967 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.211543 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.214550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.215209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.215922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.216911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.229815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.266957 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411478 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411799 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod 
\"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411843 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411865 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411942 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411995 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412050 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412076 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412099 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412225 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412478 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir" (OuterVolumeSpecName: "datadir") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412927 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412940 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config" (OuterVolumeSpecName: "config") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.413043 4829 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.413078 4829 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.415438 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token" (OuterVolumeSpecName: "collector-token") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.415908 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token" (OuterVolumeSpecName: "sa-token") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.416035 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.420760 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp" (OuterVolumeSpecName: "tmp") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.421351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc" (OuterVolumeSpecName: "kube-api-access-pr2kc") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "kube-api-access-pr2kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.421737 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics" (OuterVolumeSpecName: "metrics") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514293 4829 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514594 4829 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514665 4829 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514734 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514791 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514850 4829 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514911 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.515048 4829 reconciler_common.go:293] "Volume detached for volume 
\"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.515109 4829 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.149153 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.216102 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.225303 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.239280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-j7l9k"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.240888 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.242560 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-j7l9k"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.251341 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.252868 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-72v7n" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.253782 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.253970 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.254092 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.262874 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.287421 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee08f929-2d75-418a-ba47-8f64355f622d" path="/var/lib/kubelet/pods/ee08f929-2d75-418a-ba47-8f64355f622d/volumes" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429513 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429584 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429664 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429696 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429723 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429848 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429902 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.430005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531624 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531745 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531816 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531898 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") 
" pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531981 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532015 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532100 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532714 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533173 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533425 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.537524 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " 
pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.538557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.538651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.540059 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.556180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.561976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.564140 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-j7l9k" Feb 17 16:10:05 crc kubenswrapper[4829]: I0217 16:10:05.065761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-j7l9k"] Feb 17 16:10:05 crc kubenswrapper[4829]: I0217 16:10:05.159762 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-j7l9k" event={"ID":"768f24d9-7e75-4b78-a2a7-10cdfd579577","Type":"ContainerStarted","Data":"bb7dd5c19deab8329594890322ef7efbc4b543d2f9f2f9ccf829c4d3ec8957e7"} Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.174030 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.270809 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.306043 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.306293 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqfvj" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server" containerID="cri-o://2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" gracePeriod=2 Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.697931 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.825840 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.825923 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.826034 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.826966 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities" (OuterVolumeSpecName: "utilities") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.833349 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj" (OuterVolumeSpecName: "kube-api-access-fcbhj") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "kube-api-access-fcbhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.885530 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927914 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927957 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927969 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.198797 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" exitCode=0 Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"} Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199058 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4"} Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199073 4829 scope.go:117] "RemoveContainer" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199167 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.225260 4829 scope.go:117] "RemoveContainer" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.231169 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.241015 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.288559 4829 scope.go:117] "RemoveContainer" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.290213 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" path="/var/lib/kubelet/pods/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3/volumes" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336210 4829 scope.go:117] "RemoveContainer" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.336599 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": container with ID 
starting with 2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06 not found: ID does not exist" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336622 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"} err="failed to get container status \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": rpc error: code = NotFound desc = could not find container \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": container with ID starting with 2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06 not found: ID does not exist" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336640 4829 scope.go:117] "RemoveContainer" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21" Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.337629 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": container with ID starting with 2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21 not found: ID does not exist" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.337652 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"} err="failed to get container status \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": rpc error: code = NotFound desc = could not find container \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": container with ID starting with 2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21 not found: 
ID does not exist" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.337664 4829 scope.go:117] "RemoveContainer" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325" Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.338124 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": container with ID starting with 2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325 not found: ID does not exist" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325" Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.338168 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"} err="failed to get container status \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": rpc error: code = NotFound desc = could not find container \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": container with ID starting with 2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325 not found: ID does not exist" Feb 17 16:10:14 crc kubenswrapper[4829]: I0217 16:10:14.293703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-j7l9k" event={"ID":"768f24d9-7e75-4b78-a2a7-10cdfd579577","Type":"ContainerStarted","Data":"37ad35872a9cc39af81a394d4803d6aa082192a133ee08b01812243e5e65f745"} Feb 17 16:10:14 crc kubenswrapper[4829]: I0217 16:10:14.304104 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-j7l9k" podStartSLOduration=1.663946756 podStartE2EDuration="10.304083816s" podCreationTimestamp="2026-02-17 16:10:04 +0000 UTC" firstStartedPulling="2026-02-17 16:10:05.077531932 +0000 UTC m=+917.494549910" lastFinishedPulling="2026-02-17 16:10:13.717668992 +0000 
UTC m=+926.134686970" observedRunningTime="2026-02-17 16:10:14.300184619 +0000 UTC m=+926.717202597" watchObservedRunningTime="2026-02-17 16:10:14.304083816 +0000 UTC m=+926.721101804" Feb 17 16:10:22 crc kubenswrapper[4829]: I0217 16:10:22.425376 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:10:22 crc kubenswrapper[4829]: I0217 16:10:22.425953 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450233 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"] Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 16:10:44.450879 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-utilities" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450892 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-utilities" Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 16:10:44.450904 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-content" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450910 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-content" Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 
16:10:44.450920 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450926 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.451082 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.452046 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.454276 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.468940 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"] Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488692 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488788 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: 
\"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590709 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590808 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.591382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.591386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.617684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.770965 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.080159 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"] Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515126 4829 generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="874a55bc34adca66ed5a7c0d077eab2f9ade225a0e42b28ec2051f629c6eea06" exitCode=0 Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"874a55bc34adca66ed5a7c0d077eab2f9ade225a0e42b28ec2051f629c6eea06"} Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerStarted","Data":"e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83"} Feb 17 16:10:48 crc kubenswrapper[4829]: I0217 16:10:48.540408 4829 generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="9a49718063f82a427a5de708cd484941a8be3c9835d6a16237ffe32ce44354d6" exitCode=0 Feb 17 16:10:48 crc kubenswrapper[4829]: I0217 16:10:48.540525 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"9a49718063f82a427a5de708cd484941a8be3c9835d6a16237ffe32ce44354d6"} Feb 17 16:10:49 crc kubenswrapper[4829]: I0217 16:10:49.551986 4829 
generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="347d214a1f469ad7a36586def45e331e743cf878e189bb10837deda08ea995d7" exitCode=0 Feb 17 16:10:49 crc kubenswrapper[4829]: I0217 16:10:49.552028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"347d214a1f469ad7a36586def45e331e743cf878e189bb10837deda08ea995d7"} Feb 17 16:10:50 crc kubenswrapper[4829]: I0217 16:10:50.905880 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105119 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105233 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105297 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105876 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle" (OuterVolumeSpecName: "bundle") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.106221 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.116074 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util" (OuterVolumeSpecName: "util") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.116984 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn" (OuterVolumeSpecName: "kube-api-access-qsdxn") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "kube-api-access-qsdxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.211843 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.211887 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573351 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83"} Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573402 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83" Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573449 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.424895 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.424991 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.425064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.426125 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.426285 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" gracePeriod=600 Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.590493 4829 generic.go:334] "Generic (PLEG): 
container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" exitCode=0 Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.590810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.591078 4829 scope.go:117] "RemoveContainer" containerID="ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" Feb 17 16:10:53 crc kubenswrapper[4829]: I0217 16:10:53.602366 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.945637 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946313 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946341 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="util" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946347 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="util" Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946355 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="pull" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946362 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="pull" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946481 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.947106 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.949823 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.949970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gp7nj" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.950387 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.966799 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.073913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.175248 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.201563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.279224 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.679399 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:55 crc kubenswrapper[4829]: W0217 16:10:55.680697 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode597d80c_fb6d_45a3_9b01_4a32a59f07a6.slice/crio-b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287 WatchSource:0}: Error finding container b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287: Status 404 returned error can't find the container with id b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287 Feb 17 16:10:56 crc kubenswrapper[4829]: I0217 16:10:56.629153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" event={"ID":"e597d80c-fb6d-45a3-9b01-4a32a59f07a6","Type":"ContainerStarted","Data":"b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287"} Feb 17 16:10:58 crc kubenswrapper[4829]: I0217 
16:10:58.668430 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" event={"ID":"e597d80c-fb6d-45a3-9b01-4a32a59f07a6","Type":"ContainerStarted","Data":"039dbb88fab254603228749cbe5085cc9e2ef51e16d9e59f8315746a75e706b7"} Feb 17 16:10:58 crc kubenswrapper[4829]: I0217 16:10:58.686563 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" podStartSLOduration=2.723690505 podStartE2EDuration="4.686544145s" podCreationTimestamp="2026-02-17 16:10:54 +0000 UTC" firstStartedPulling="2026-02-17 16:10:55.684047405 +0000 UTC m=+968.101065383" lastFinishedPulling="2026-02-17 16:10:57.646901025 +0000 UTC m=+970.063919023" observedRunningTime="2026-02-17 16:10:58.682645519 +0000 UTC m=+971.099663497" watchObservedRunningTime="2026-02-17 16:10:58.686544145 +0000 UTC m=+971.103562133" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.484684 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.486952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.488398 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g6zcq" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.491121 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.492237 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.493546 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.501608 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.520346 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.551426 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-47lp4"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.553074 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647587 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647652 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647741 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647807 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.652869 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.653876 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.657234 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.657392 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-x5nwp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.672450 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.674847 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750454 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750507 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750617 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750635 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750980 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.751332 4829 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.751380 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair podName:55a7b0a0-24f0-4b6b-82bf-f131f831af3a nodeName:}" failed. 
No retries permitted until 2026-02-17 16:11:06.251362249 +0000 UTC m=+978.668380227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair") pod "nmstate-webhook-866bcb46dc-v2bww" (UID: "55a7b0a0-24f0-4b6b-82bf-f131f831af3a") : secret "openshift-nmstate-webhook" not found Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.751619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.751648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.788533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.802863 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 
16:11:05.803010 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.813011 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857478 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857623 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.857785 4829 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 
16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.857839 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert podName:df7e3d75-f36c-4258-ae86-6bb72db7c0e4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:06.35782332 +0000 UTC m=+978.774841298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-mchvp" (UID: "df7e3d75-f36c-4258-ae86-6bb72db7c0e4") : secret "plugin-serving-cert" not found Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.859154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.869342 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.910253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.940426 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.943860 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.976066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067415 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067471 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067638 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067710 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067728 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067745 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169236 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 
16:11:06.169262 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169285 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169366 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170276 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170522 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.173040 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.173178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.187045 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.271258 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.274773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.308369 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.349846 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.373206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.376871 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.425229 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.575648 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.732112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"908d77668dd9f13bf54ca68f6bc92a171a53518d505cbec033eff4cacdd9303d"} Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.734070 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-47lp4" event={"ID":"4e62a7c0-ac99-4dd8-a587-58c98adb3a25","Type":"ContainerStarted","Data":"cad8acfbdb19eee6f9c474f995a0155668bd17c0d5d0ea98b7bb7f5af5a20f25"} Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.805378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.813489 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc453fb9_9d54_4441_bcae_64e34e837dac.slice/crio-1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7 WatchSource:0}: Error finding container 1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7: Status 404 returned error can't find the container with id 1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7 Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.834034 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.841155 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf7e3d75_f36c_4258_ae86_6bb72db7c0e4.slice/crio-afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093 WatchSource:0}: Error finding container 
afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093: Status 404 returned error can't find the container with id afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093 Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.911188 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.915468 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a7b0a0_24f0_4b6b_82bf_f131f831af3a.slice/crio-2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16 WatchSource:0}: Error finding container 2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16: Status 404 returned error can't find the container with id 2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16 Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.754318 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" event={"ID":"df7e3d75-f36c-4258-ae86-6bb72db7c0e4","Type":"ContainerStarted","Data":"afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.755975 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerStarted","Data":"76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.756037 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerStarted","Data":"1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.758035 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" event={"ID":"55a7b0a0-24f0-4b6b-82bf-f131f831af3a","Type":"ContainerStarted","Data":"2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.777025 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-864565556d-824bj" podStartSLOduration=2.777009385 podStartE2EDuration="2.777009385s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:11:07.773627815 +0000 UTC m=+980.190645793" watchObservedRunningTime="2026-02-17 16:11:07.777009385 +0000 UTC m=+980.194027363" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.774408 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" event={"ID":"55a7b0a0-24f0-4b6b-82bf-f131f831af3a","Type":"ContainerStarted","Data":"01de5783cf50eb53fa7c3d3fd4fb4448a4082b23f3514cafab3f491b4bced204"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.774865 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.777232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-47lp4" event={"ID":"4e62a7c0-ac99-4dd8-a587-58c98adb3a25","Type":"ContainerStarted","Data":"7396e859466a78f066ed44e70b88be1c92bbfc1fb80fadb3b24d6388370c6b94"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.777321 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.778801 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" 
event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"6edf72e5ac8b699491eb0f520f374a3d61fcaa48fa6b585a0a16b80c72be6ba9"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.792987 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" podStartSLOduration=2.883003023 podStartE2EDuration="4.792972773s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.918016197 +0000 UTC m=+979.335034175" lastFinishedPulling="2026-02-17 16:11:08.827985907 +0000 UTC m=+981.245003925" observedRunningTime="2026-02-17 16:11:09.790048535 +0000 UTC m=+982.207066503" watchObservedRunningTime="2026-02-17 16:11:09.792972773 +0000 UTC m=+982.209990751" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.819244 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-47lp4" podStartSLOduration=1.937279932 podStartE2EDuration="4.819219533s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:05.94435764 +0000 UTC m=+978.361375618" lastFinishedPulling="2026-02-17 16:11:08.826297231 +0000 UTC m=+981.243315219" observedRunningTime="2026-02-17 16:11:09.814313522 +0000 UTC m=+982.231331500" watchObservedRunningTime="2026-02-17 16:11:09.819219533 +0000 UTC m=+982.236237511" Feb 17 16:11:10 crc kubenswrapper[4829]: I0217 16:11:10.789247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" event={"ID":"df7e3d75-f36c-4258-ae86-6bb72db7c0e4","Type":"ContainerStarted","Data":"30d9f08bd040a55f8cb65c9f090bd8a0eafe1566a713ce987b8e0ef5cfd18678"} Feb 17 16:11:10 crc kubenswrapper[4829]: I0217 16:11:10.809488 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" podStartSLOduration=2.750207731 
podStartE2EDuration="5.809464984s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.843876599 +0000 UTC m=+979.260894577" lastFinishedPulling="2026-02-17 16:11:09.903133832 +0000 UTC m=+982.320151830" observedRunningTime="2026-02-17 16:11:10.801631444 +0000 UTC m=+983.218649452" watchObservedRunningTime="2026-02-17 16:11:10.809464984 +0000 UTC m=+983.226482962" Feb 17 16:11:11 crc kubenswrapper[4829]: I0217 16:11:11.800868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"577e20ad2933f746b58851298d6006c06b5241e2355d47469f8202e1eb05b0a8"} Feb 17 16:11:11 crc kubenswrapper[4829]: I0217 16:11:11.828056 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" podStartSLOduration=1.645992547 podStartE2EDuration="6.828017129s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.366444156 +0000 UTC m=+978.783462154" lastFinishedPulling="2026-02-17 16:11:11.548468718 +0000 UTC m=+983.965486736" observedRunningTime="2026-02-17 16:11:11.823026786 +0000 UTC m=+984.240044784" watchObservedRunningTime="2026-02-17 16:11:11.828017129 +0000 UTC m=+984.245035117" Feb 17 16:11:15 crc kubenswrapper[4829]: I0217 16:11:15.898211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.309374 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.309434 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.315770 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-864565556d-824bj"
Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.847949 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-864565556d-824bj"
Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.914069 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"]
Feb 17 16:11:26 crc kubenswrapper[4829]: I0217 16:11:26.438102 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"
Feb 17 16:11:41 crc kubenswrapper[4829]: I0217 16:11:41.977635 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-797db4bf78-znlsn" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console" containerID="cri-o://bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" gracePeriod=15
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.527122 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-797db4bf78-znlsn_6fa156f6-505b-4ad3-b8e7-b66291338bc9/console/0.log"
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.527447 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.617405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.617951 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618120 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618161 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618246 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618347 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618977 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config" (OuterVolumeSpecName: "console-config") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619168 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") "
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619747 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca" (OuterVolumeSpecName: "service-ca") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620090 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620106 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620119 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620128 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.624443 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.633236 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr" (OuterVolumeSpecName: "kube-api-access-9wmkr") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "kube-api-access-9wmkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.633509 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722154 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722384 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722396 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091094 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-797db4bf78-znlsn_6fa156f6-505b-4ad3-b8e7-b66291338bc9/console/0.log"
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091179 4829 generic.go:334] "Generic (PLEG): container finished" podID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" exitCode=2
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091224 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerDied","Data":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"}
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091264 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerDied","Data":"bfae83dcdb0a183b25666f792e4baf03784ae0581990e298c8186a70a2bee65f"}
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091292 4829 scope.go:117] "RemoveContainer" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091497 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.130783 4829 scope.go:117] "RemoveContainer" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"
Feb 17 16:11:43 crc kubenswrapper[4829]: E0217 16:11:43.132025 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": container with ID starting with bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e not found: ID does not exist" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.132078 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"} err="failed to get container status \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": rpc error: code = NotFound desc = could not find container \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": container with ID starting with bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e not found: ID does not exist"
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.137118 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"]
Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.141781 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"]
Feb 17 16:11:44 crc kubenswrapper[4829]: I0217 16:11:44.291962 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" path="/var/lib/kubelet/pods/6fa156f6-505b-4ad3-b8e7-b66291338bc9/volumes"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.119525 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"]
Feb 17 16:11:48 crc kubenswrapper[4829]: E0217 16:11:48.120491 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.120513 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.120885 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.123231 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.133977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.140311 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"]
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216138 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318397 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318466 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318503 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.319256 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.319490 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.351824 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.448290 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.456660 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.918990 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"]
Feb 17 16:11:48 crc kubenswrapper[4829]: W0217 16:11:48.928231 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ecbb28_5618_4f33_9125_c0372c407b89.slice/crio-72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7 WatchSource:0}: Error finding container 72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7: Status 404 returned error can't find the container with id 72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7
Feb 17 16:11:49 crc kubenswrapper[4829]: I0217 16:11:49.148879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerStarted","Data":"e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f"}
Feb 17 16:11:49 crc kubenswrapper[4829]: I0217 16:11:49.149158 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerStarted","Data":"72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7"}
Feb 17 16:11:50 crc kubenswrapper[4829]: I0217 16:11:50.170865 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f" exitCode=0
Feb 17 16:11:50 crc kubenswrapper[4829]: I0217 16:11:50.171196 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f"}
Feb 17 16:11:53 crc kubenswrapper[4829]: I0217 16:11:53.195954 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="595ccae63b4f2be9a50ce2e039446a2c09503ab4c57fe55384f3b7577856f2f5" exitCode=0
Feb 17 16:11:53 crc kubenswrapper[4829]: I0217 16:11:53.196065 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"595ccae63b4f2be9a50ce2e039446a2c09503ab4c57fe55384f3b7577856f2f5"}
Feb 17 16:11:54 crc kubenswrapper[4829]: I0217 16:11:54.205016 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="8f9ea8944c3ea357e608b23d3e385077f9d06f003cc95e5fb8fddac21c046991" exitCode=0
Feb 17 16:11:54 crc kubenswrapper[4829]: I0217 16:11:54.205061 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"8f9ea8944c3ea357e608b23d3e385077f9d06f003cc95e5fb8fddac21c046991"}
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.567159 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") "
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647914 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") "
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647949 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") "
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.648860 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle" (OuterVolumeSpecName: "bundle") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.654667 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h" (OuterVolumeSpecName: "kube-api-access-68t8h") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "kube-api-access-68t8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.665652 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util" (OuterVolumeSpecName: "util") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750911 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750966 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750987 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7"}
Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227859 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"
Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227871 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.837897 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"]
Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838687 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="util"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838700 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="util"
Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838712 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="pull"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838718 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="pull"
Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838740 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838746 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838862 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.839376 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.841186 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.850880 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852504 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852591 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xzx6f"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852655 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.857452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"]
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971300 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.072895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.072986 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.073048 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.080519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"]
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.080655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.081782 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.085176 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087034 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087240 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mjkpp"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087428 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.103279 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.114356 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"]
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.158613 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.173725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.174068 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.174122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.275007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.275062 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.275127 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.281677 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"
Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.292359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\"
(UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.301417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.473629 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.649123 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"] Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.982710 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"] Feb 17 16:12:07 crc kubenswrapper[4829]: W0217 16:12:07.987960 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90b368e2_73a9_4594_8428_e17a7bb1e499.slice/crio-dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e WatchSource:0}: Error finding container dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e: Status 404 returned error can't find the container with id dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e Feb 17 16:12:08 crc kubenswrapper[4829]: I0217 16:12:08.337007 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" 
event={"ID":"c5cf20c6-9fae-4c85-9c16-53e313c04cda","Type":"ContainerStarted","Data":"2b0410ba236172b8a0e4828a66fd1d5b9725a457e8a70eb39b1fc87534f20fa6"} Feb 17 16:12:08 crc kubenswrapper[4829]: I0217 16:12:08.339378 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" event={"ID":"90b368e2-73a9-4594-8428-e17a7bb1e499","Type":"ContainerStarted","Data":"dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e"} Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.362237 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" event={"ID":"c5cf20c6-9fae-4c85-9c16-53e313c04cda","Type":"ContainerStarted","Data":"586ba4aa8780242b2c8d89354a083d24911e53f5e530276a1cdc345f3f39f253"} Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.363779 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.389281 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" podStartSLOduration=2.008957725 podStartE2EDuration="5.389258577s" podCreationTimestamp="2026-02-17 16:12:06 +0000 UTC" firstStartedPulling="2026-02-17 16:12:07.671063735 +0000 UTC m=+1040.088081713" lastFinishedPulling="2026-02-17 16:12:11.051364587 +0000 UTC m=+1043.468382565" observedRunningTime="2026-02-17 16:12:11.379216858 +0000 UTC m=+1043.796234836" watchObservedRunningTime="2026-02-17 16:12:11.389258577 +0000 UTC m=+1043.806276555" Feb 17 16:12:13 crc kubenswrapper[4829]: I0217 16:12:13.393860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" 
event={"ID":"90b368e2-73a9-4594-8428-e17a7bb1e499","Type":"ContainerStarted","Data":"30941ca2c2a4ab1dbc253a918d2e520afd56f2324ae307cbfda9f40ad1132d02"} Feb 17 16:12:13 crc kubenswrapper[4829]: I0217 16:12:13.394212 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:27 crc kubenswrapper[4829]: I0217 16:12:27.501615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:27 crc kubenswrapper[4829]: I0217 16:12:27.537491 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" podStartSLOduration=15.895978829 podStartE2EDuration="20.537472363s" podCreationTimestamp="2026-02-17 16:12:07 +0000 UTC" firstStartedPulling="2026-02-17 16:12:07.993653015 +0000 UTC m=+1040.410671003" lastFinishedPulling="2026-02-17 16:12:12.635146549 +0000 UTC m=+1045.052164537" observedRunningTime="2026-02-17 16:12:13.419180286 +0000 UTC m=+1045.836198264" watchObservedRunningTime="2026-02-17 16:12:27.537472363 +0000 UTC m=+1059.954490341" Feb 17 16:12:47 crc kubenswrapper[4829]: I0217 16:12:47.162476 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.192649 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7qwft"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.195617 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.199627 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.199646 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w5psx" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.205274 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.210143 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.211248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.217791 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.218345 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.320439 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8gr6k"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.322222 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zzhzt" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327249 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327429 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.344481 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.346175 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.348758 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.364713 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366870 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366901 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc 
kubenswrapper[4829]: I0217 16:12:48.366964 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367011 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367077 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468322 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468388 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468423 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468467 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468489 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468511 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468548 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468628 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwv92\" (UniqueName: \"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468674 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468730 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468785 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468822 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.470149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc 
kubenswrapper[4829]: I0217 16:12:48.470450 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.470637 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.471084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.471176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.475235 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.475235 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.476808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.481002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.481069 4829 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.481112 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs podName:901c7cfc-f3f1-470c-bd1f-47ab57bb1b53 nodeName:}" failed. No retries permitted until 2026-02-17 16:12:48.981100151 +0000 UTC m=+1081.398118129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs") pod "frr-k8s-7qwft" (UID: "901c7cfc-f3f1-470c-bd1f-47ab57bb1b53") : secret "frr-k8s-certs-secret" not found Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.489310 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.504121 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.572660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.572729 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv92\" (UniqueName: 
\"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573268 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573288 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.573369 4829 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.573422 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist podName:a25680cc-e984-4ad7-95e2-3fe561a5fa8c nodeName:}" failed. No retries permitted until 2026-02-17 16:12:49.073407545 +0000 UTC m=+1081.490425523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist") pod "speaker-8gr6k" (UID: "a25680cc-e984-4ad7-95e2-3fe561a5fa8c") : secret "metallb-memberlist" not found Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573375 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573463 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.574468 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.580843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.581102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.581345 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.583279 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w5psx"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.588178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.588356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.591491 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.591820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv92\" (UniqueName: \"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.660866 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.993861 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.999014 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:12:49 crc kubenswrapper[4829]: W0217 16:12:49.055160 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ddfc374_12f8_443a_bcc1_526613e031bf.slice/crio-b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f WatchSource:0}: Error finding container b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f: Status 404 returned error can't find the container with id b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.057640 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.058216 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"]
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.095945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:49 crc kubenswrapper[4829]: E0217 16:12:49.096094 4829 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 16:12:49 crc kubenswrapper[4829]: E0217 16:12:49.096146 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist podName:a25680cc-e984-4ad7-95e2-3fe561a5fa8c nodeName:}" failed. No retries permitted until 2026-02-17 16:12:50.096132737 +0000 UTC m=+1082.513150715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist") pod "speaker-8gr6k" (UID: "a25680cc-e984-4ad7-95e2-3fe561a5fa8c") : secret "metallb-memberlist" not found
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.136511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"]
Feb 17 16:12:49 crc kubenswrapper[4829]: W0217 16:12:49.141444 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1da62b69_54b6_4041_885f_acda828405c9.slice/crio-317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8 WatchSource:0}: Error finding container 317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8: Status 404 returned error can't find the container with id 317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.156197 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.779252 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" event={"ID":"8ddfc374-12f8-443a-bcc1-526613e031bf","Type":"ContainerStarted","Data":"b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f"}
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782625 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"1f2b4d973a38190c89afc29f0404e56be82795fa6683effe3aa96ddfcaa047d7"}
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"ed0f7057f2dd25efde919280825925dc683bd3674509d9c4a96f4c60a7d6bcf5"}
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8"}
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.783030 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.783916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"8d5e40f95b8b32b0e4659116a384009375ce7f0a242497af27a6ecf9f27201a2"}
Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.810694 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-g4znl" podStartSLOduration=1.8106727089999999 podStartE2EDuration="1.810672709s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:12:49.80249101 +0000 UTC m=+1082.219508998" watchObservedRunningTime="2026-02-17 16:12:49.810672709 +0000 UTC m=+1082.227690697"
Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.140532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.149079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.445148 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:50 crc kubenswrapper[4829]: W0217 16:12:50.492755 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25680cc_e984_4ad7_95e2_3fe561a5fa8c.slice/crio-706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6 WatchSource:0}: Error finding container 706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6: Status 404 returned error can't find the container with id 706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6
Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.802584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"e8880e7320f84ab2c9dbdc4a1ce02de55071649f1b72fe7eb03867b5e90bff76"}
Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.803575 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6"}
Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.818303 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"8437d6e9c831510743064901310618af296374f0903064abe7e5a40242e2b96e"}
Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.818425 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gr6k"
Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.841048 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8gr6k" podStartSLOduration=3.841032461 podStartE2EDuration="3.841032461s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:12:51.838300608 +0000 UTC m=+1084.255318586" watchObservedRunningTime="2026-02-17 16:12:51.841032461 +0000 UTC m=+1084.258050439"
Feb 17 16:12:52 crc kubenswrapper[4829]: I0217 16:12:52.424818 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:12:52 crc kubenswrapper[4829]: I0217 16:12:52.424876 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.871967 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" event={"ID":"8ddfc374-12f8-443a-bcc1-526613e031bf","Type":"ContainerStarted","Data":"00836c2fbad67147f5669bc2e2110be71ba1eb87ab8b6c03f17d00b665ad892e"}
Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874122 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"
Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874630 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="0e3ca35e5382f1b19ce9e6905d010989593420d7ecacee9dba37295db690f677" exitCode=0
Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"0e3ca35e5382f1b19ce9e6905d010989593420d7ecacee9dba37295db690f677"}
Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.906529 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" podStartSLOduration=2.104617665 podStartE2EDuration="9.906496352s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.057294231 +0000 UTC m=+1081.474312219" lastFinishedPulling="2026-02-17 16:12:56.859172928 +0000 UTC m=+1089.276190906" observedRunningTime="2026-02-17 16:12:57.899035612 +0000 UTC m=+1090.316053630" watchObservedRunningTime="2026-02-17 16:12:57.906496352 +0000 UTC m=+1090.323514370"
Feb 17 16:12:58 crc kubenswrapper[4829]: I0217 16:12:58.897331 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="682ae7384a37d88e27884ddce5f3b338f9aa4fc29ac807fdbbb7139c0cb56e6f" exitCode=0
Feb 17 16:12:58 crc kubenswrapper[4829]: I0217 16:12:58.898657 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"682ae7384a37d88e27884ddce5f3b338f9aa4fc29ac807fdbbb7139c0cb56e6f"}
Feb 17 16:12:59 crc kubenswrapper[4829]: I0217 16:12:59.905760 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="d6ebf9b0c6b3aa3c2de9a8e95d635483695be50bf07e29cf4a1d04a743aa6113" exitCode=0
Feb 17 16:12:59 crc kubenswrapper[4829]: I0217 16:12:59.905847 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"d6ebf9b0c6b3aa3c2de9a8e95d635483695be50bf07e29cf4a1d04a743aa6113"}
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.449280 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gr6k"
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.917809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"463ab0bbb16bb92261c15e48f9ae939fb135ebcb5f3df50b11d1cbd134fcf318"}
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918099 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"05478e29648db01f7b6c736aa5a45a4903a2ab55899a73fa68c92fd5bb871b3a"}
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"7620e4186e15ecc26087bf64d5d082c690cd3a3c7702b0f1bc3c289869be07d5"}
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"9d2be24f1bf8eddc184ee056770427a8ecbbf4a7d83a3a1059d16c84f6231fb3"}
Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"420e8a9a9375d5c975e01310feff342eccd9a4b0f903e3093ef5d7b3aab9963e"}
Feb 17 16:13:01 crc kubenswrapper[4829]: I0217 16:13:01.939631 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"fa476528f7c96eb7e1517034a9892a14173128c6cc9bdf2a801c712232fddea2"}
Feb 17 16:13:01 crc kubenswrapper[4829]: I0217 16:13:01.939978 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.157161 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.215671 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.250586 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7qwft" podStartSLOduration=8.802922727 podStartE2EDuration="16.250550949s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.388140641 +0000 UTC m=+1081.805158639" lastFinishedPulling="2026-02-17 16:12:56.835768883 +0000 UTC m=+1089.252786861" observedRunningTime="2026-02-17 16:13:01.964874832 +0000 UTC m=+1094.381892820" watchObservedRunningTime="2026-02-17 16:13:04.250550949 +0000 UTC m=+1096.667568927"
Feb 17 16:13:08 crc kubenswrapper[4829]: I0217 16:13:08.625693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"
Feb 17 16:13:08 crc kubenswrapper[4829]: I0217 16:13:08.664940 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-g4znl"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.082219 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"]
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.084501 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.087884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mrxbp"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.089315 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.090025 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.097041 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"]
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.208560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: \"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.310717 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: \"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.327375 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: \"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.423502 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.874304 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"]
Feb 17 16:13:10 crc kubenswrapper[4829]: I0217 16:13:10.017283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6p47w" event={"ID":"24ddb2b4-4194-4df5-8820-9ea9c405abc7","Type":"ContainerStarted","Data":"e0a0ac14a9ec77ff26e9edd15a2139a3e52e6d3468e83e1b4ee855db09b3b565"}
Feb 17 16:13:16 crc kubenswrapper[4829]: I0217 16:13:16.089230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6p47w" event={"ID":"24ddb2b4-4194-4df5-8820-9ea9c405abc7","Type":"ContainerStarted","Data":"455e387075a05389a7b37c16dcbfa2b06e409760fcb396e9c51a87427e0fbc02"}
Feb 17 16:13:16 crc kubenswrapper[4829]: I0217 16:13:16.117076 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6p47w" podStartSLOduration=1.680429666 podStartE2EDuration="7.117047498s" podCreationTimestamp="2026-02-17 16:13:09 +0000 UTC" firstStartedPulling="2026-02-17 16:13:09.890467287 +0000 UTC m=+1102.307485305" lastFinishedPulling="2026-02-17 16:13:15.327085119 +0000 UTC m=+1107.744103137" observedRunningTime="2026-02-17 16:13:16.107377305 +0000 UTC m=+1108.524395323" watchObservedRunningTime="2026-02-17 16:13:16.117047498 +0000 UTC m=+1108.534065506"
Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.161643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7qwft"
Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.424350 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.424728 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.468881 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:20 crc kubenswrapper[4829]: I0217 16:13:20.177260 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6p47w"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.503126 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"]
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.505677 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.507734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-27r92"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.513193 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"]
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.639995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.640117 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.640312 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742369 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742410 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742928 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.743209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.775567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.824460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.361427 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"]
Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.425002 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.425068 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 16:13:23.163439 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="9f874b6512a76eca1a3bf4f47a6e9cb2321418a3f501b2e13072fb2895b465e7" exitCode=0
Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 16:13:23.163474 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"9f874b6512a76eca1a3bf4f47a6e9cb2321418a3f501b2e13072fb2895b465e7"}
Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 16:13:23.163498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerStarted","Data":"7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2"}
Feb 17 16:13:24 crc kubenswrapper[4829]: I0217 16:13:24.177629 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="4a7c39e048d790718740f3991e6cd1b7b2ff97312edb34c4e151b35c42537a78" exitCode=0
Feb 17 16:13:24 crc kubenswrapper[4829]: I0217 16:13:24.177709 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"4a7c39e048d790718740f3991e6cd1b7b2ff97312edb34c4e151b35c42537a78"}
Feb 17 16:13:25 crc kubenswrapper[4829]: I0217 16:13:25.191231 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="01abad8c7a5bbcf5ec651f969643efcad42c80a6f82f3f6928f791cc2511528c" exitCode=0
Feb 17 16:13:25 crc kubenswrapper[4829]: I0217 16:13:25.191291 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"01abad8c7a5bbcf5ec651f969643efcad42c80a6f82f3f6928f791cc2511528c"}
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.544864 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") "
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") "
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657708 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") "
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.658442 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle" (OuterVolumeSpecName: "bundle") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.663384 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc" (OuterVolumeSpecName: "kube-api-access-pvmjc") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "kube-api-access-pvmjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.678461 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util" (OuterVolumeSpecName: "util") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759182 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759224 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759236 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2"}
Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208428 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2"
Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208448 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.270996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"]
Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271542 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="util"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271555 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="util"
Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271584 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271591 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract"
Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271604 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="pull"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271611 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="pull"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271752 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.272240 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.286173 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-b4s9w"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.314116 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"]
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.440436 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod \"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.541892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod \"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.558424 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod \"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.590026 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:32 crc kubenswrapper[4829]: I0217 16:13:32.069187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"]
Feb 17 16:13:32 crc kubenswrapper[4829]: I0217 16:13:32.252031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" event={"ID":"f5adeb4d-89fb-480c-a429-7cf978198db2","Type":"ContainerStarted","Data":"e8d67f405e6f576148e50ad2ca806792dc299f6c5699fb2d26586da453a1e641"}
Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.310085 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" event={"ID":"f5adeb4d-89fb-480c-a429-7cf978198db2","Type":"ContainerStarted","Data":"563df93fbb6d3252ec49b4cdb26cd800d557a0ce2f612159b6fe139e7241c2ff"}
Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.310883 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"
Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.373130 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" podStartSLOduration=1.933274199 podStartE2EDuration="6.373101096s" podCreationTimestamp="2026-02-17 16:13:31 +0000 UTC" firstStartedPulling="2026-02-17 16:13:32.076806831 +0000 UTC m=+1124.493824799" lastFinishedPulling="2026-02-17 16:13:36.516633718 +0000 UTC m=+1128.933651696"
observedRunningTime="2026-02-17 16:13:37.369724774 +0000 UTC m=+1129.786742812" watchObservedRunningTime="2026-02-17 16:13:37.373101096 +0000 UTC m=+1129.790119124" Feb 17 16:13:41 crc kubenswrapper[4829]: I0217 16:13:41.593891 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.424480 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.425163 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.425217 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.426065 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.426156 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8" gracePeriod=600 Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.461689 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8" exitCode=0 Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.461722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.462291 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"} Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.462321 4829 scope.go:117] "RemoveContainer" containerID="87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.788832 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.791040 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.800144 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.808105 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.812079 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r2fsv" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.822973 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bfc57" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.839568 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.875843 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.884690 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.886008 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.890603 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nbqhf" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.891353 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.892299 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.895011 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-479nq" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.921897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.921931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.929466 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.939989 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.961119 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.962248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.964332 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k8bsk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.984821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.020740 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.021936 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022893 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod \"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022947 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022973 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 
16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.025361 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xgrh4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.033114 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.036105 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.037316 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.040230 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h26n4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.043346 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.044379 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.045444 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.046882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sld5q" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.060978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.076996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.078093 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.079952 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.090035 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jv49f" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.090155 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.107638 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.115178 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.121530 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125029 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125160 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod 
\"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.127491 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.131294 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.132858 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.133846 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.138454 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8rf98" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.138751 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qmbqj" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.139924 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.144038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.150239 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.151255 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.154687 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.159977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zt6g9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.163359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod \"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.164487 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.165419 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.166460 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.167059 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-w6krp" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.170761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.177057 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.184850 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.186653 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.189826 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tws64" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.191397 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.192470 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.201468 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.202541 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.207450 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.217879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.220977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.221773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-k4c7x" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.221986 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ms8s5" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.224995 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227261 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227303 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" 
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227349 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227386 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.236286 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.240229 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.256165 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.265054 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.299113 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.320643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322200 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322848 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322925 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.323297 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.327032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tbz7q" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329862 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329896 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329927 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod \"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329978 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330048 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330092 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330134 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod 
\"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.330319 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.330384 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:02.830363839 +0000 UTC m=+1155.247381857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330750 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mlj48" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.349715 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.350809 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.384376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.385247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.391013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.389870 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.426098 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod \"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc 
kubenswrapper[4829]: I0217 16:14:02.431434 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431455 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431479 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.433079 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.433120 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:02.933106613 +0000 UTC m=+1155.350124591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.466292 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.467096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod 
\"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.473555 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.474271 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.474753 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.475833 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.478809 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6tdx8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.479477 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.484717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.508886 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.534250 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.534343 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.555064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.558890 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.564388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.572806 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.585692 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.586806 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.589935 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ndn4t" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.590377 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.603224 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.611964 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.619542 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.637691 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.647101 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.648154 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.649883 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.668067 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.694055 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rdq6s" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.704154 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.730776 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.796234 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.806195 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.810137 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.815745 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.818148 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.829639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.846798 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.847058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.847204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gjtfw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.875470 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.878128 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.878873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.880361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.880706 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.880778 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.880758971 +0000 UTC m=+1156.297776949 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.883953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bgxbx"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.884399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.903713 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.915366 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"]
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.919367 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.942029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"]
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987185 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987258 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987433 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.988025 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.988189 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.988127842 +0000 UTC m=+1156.405145850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.997358 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.005627 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.073894 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.089361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.090744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.091029 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.091650 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.090967 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093015 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.592991115 +0000 UTC m=+1156.010009083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093057 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093222 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.593207021 +0000 UTC m=+1156.010224999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.122550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.129655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.210625 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.225914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.231669 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.580393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" event={"ID":"6084260e-35c2-43b5-9606-98e1e0463e98","Type":"ContainerStarted","Data":"d3410af211ad4c60c6f09d81b3076243ab1ee30ec2fa859ff503f169f38c3570"}
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.583724 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" event={"ID":"bb32d7a2-68ff-4511-a04f-fa09657791db","Type":"ContainerStarted","Data":"58f581f92c478154f509f0259f6584d596409df4463a4e75721952fa7b252733"}
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.587262 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" event={"ID":"f3add145-231f-4d7b-b9dd-115026b2a05e","Type":"ContainerStarted","Data":"85171fc1f119509fcc45e3b9bdfc6e138577d5189b233cd292c7574c61ee6e25"}
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.592171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" event={"ID":"a711806b-ee8c-4fb8-b5da-da5e90ef06c6","Type":"ContainerStarted","Data":"6f04c533082e9c2013e18960e0504788f17d3b4cbda263ec4c5601b14b35aa1f"}
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.598529 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.598615 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598840 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598926 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:04.598884877 +0000 UTC m=+1157.015902855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598989 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.599018 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:04.59900606 +0000 UTC m=+1157.016024038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.617483 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.633510 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.658992 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.904194 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.904370 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.904439 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:05.904422518 +0000 UTC m=+1158.321440496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.963600 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"]
Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.999365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"]
Feb 17 16:14:04 crc kubenswrapper[4829]: W0217 16:14:04.003142 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8642cada_3458_43cc_90aa_cf66a1cd6426.slice/crio-db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e WatchSource:0}: Error finding container db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e: Status 404 returned error can't find the container with id db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.005310 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.005515 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.005554 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.005541368 +0000 UTC m=+1158.422559346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.005844 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.622810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.622853 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623111 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623173 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.62314158 +0000 UTC m=+1159.040159548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623564 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623617 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.623606112 +0000 UTC m=+1159.040624090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.630520 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.653702 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.672355 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" event={"ID":"60ea5425-d352-4d97-bedf-f01d07c89949","Type":"ContainerStarted","Data":"b25769481bc37e0a5f8c0e1d4fd84083842e28fd72bf6b2df8a783b9358600ea"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.688767 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" event={"ID":"62cfcaa0-5c8a-4a67-95b7-83aa695a8640","Type":"ContainerStarted","Data":"66070a0d3571614bcf2b5f12cf3c4fdc18a5c053996dd16f0fd1acb53fba5a4a"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.720637 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.738025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" event={"ID":"dd52262f-900a-4801-8c4c-f79787b6b715","Type":"ContainerStarted","Data":"f94c4995762de432a8368781f2bde5a94e5519d036b3006064f6fc1a581009c4"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.747433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" event={"ID":"f083cb81-0369-46de-9562-406736ae7e2f","Type":"ContainerStarted","Data":"efbb08583c96fefe42cb25a8046733c7e6fc5c4e228a4deac5dd9ef01ec42d49"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.806899 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.813034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" event={"ID":"84a22a6b-1fb5-4959-9342-0bcc4b033b68","Type":"ContainerStarted","Data":"7ac8aedda18ff4310549ae6c63829785bfb5a36530589d9cd2c9bcfa014b3702"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.845740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" event={"ID":"8642cada-3458-43cc-90aa-cf66a1cd6426","Type":"ContainerStarted","Data":"db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e"}
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.847424 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"]
Feb 17 16:14:04 crc kubenswrapper[4829]: W0217 16:14:04.853837 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6c89f9_2c4f_4bab_8d8b_cd746acb3426.slice/crio-c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6 WatchSource:0}: Error finding container c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6: Status 404 returned error can't find the container with id c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.882131 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.893657 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.906746 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.915912 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"]
Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.923273 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"]
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.909684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" event={"ID":"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0","Type":"ContainerStarted","Data":"33f9c70afe01e505a4f30007cf2c8d966f92fe5a38d82e008e1f730d77b6816c"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.926440 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" event={"ID":"584ed73b-c202-4d41-b884-cd9c279b3c0d","Type":"ContainerStarted","Data":"e07d17a09927d51e3271887e229f5ed2e371c90e8fd6b19d826a5fd16266c960"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.936786 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" event={"ID":"5239a5a9-e318-4db3-8394-0427d57d4ae5","Type":"ContainerStarted","Data":"1889e69af315b274f62d9360c799393e9edfaa0b671c5288315b1fb26ca98b98"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.938841 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" event={"ID":"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3","Type":"ContainerStarted","Data":"e2aed83c83cbf88c1bb273eeee622bc46b09921dc834970cc3c1ff38b10d42e2"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.943559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" event={"ID":"72028d3b-7fd0-4b17-b0c2-c92bc7134637","Type":"ContainerStarted","Data":"4d4751fed392a63d6b63f9ea9d8699bb2bd433fb65613425a69f784c537189cd"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.944354 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" event={"ID":"3aab9223-4e3f-4657-afc2-91d0e0948542","Type":"ContainerStarted","Data":"b1df749bc136c27e822d99a7a1a3f305efce19ae7529fced4d5026d65d634147"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.945014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" event={"ID":"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426","Type":"ContainerStarted","Data":"c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.945698 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" event={"ID":"eaf75815-7964-4bc0-aeae-d3306764d7f4","Type":"ContainerStarted","Data":"71b81c0e0364c4314eac35a90e09cea78ec835b4246f4483eccfb631eb8d9c6d"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.947170 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" event={"ID":"958dea67-d633-4f5c-a18e-2aca1a55020c","Type":"ContainerStarted","Data":"a255871753472c853813d1f36260ab099692af7e6f9a50753b92664e4e6f2c9c"}
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.959895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"
Feb 17 16:14:05 crc kubenswrapper[4829]: E0217 16:14:05.960083 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:05 crc kubenswrapper[4829]: E0217 16:14:05.960122 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:09.960110258 +0000 UTC m=+1162.377128236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.967087 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" event={"ID":"2237138f-4450-415b-9646-c2ab9f88194a","Type":"ContainerStarted","Data":"8bf70cb13d0e908ecc6d38fc39a955e726af63a2a354c739ea093daf51cc0027"}
Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.061227 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.061422 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.061499 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.061481557 +0000 UTC m=+1162.478499535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.675602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.675669 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.675906 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.675971 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.675953422 +0000 UTC m=+1163.092971400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.676409 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.676451 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.676440005 +0000 UTC m=+1163.093457983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found
Feb 17 16:14:09 crc kubenswrapper[4829]: I0217 16:14:09.968210 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"
Feb 17 16:14:09 crc kubenswrapper[4829]: E0217 16:14:09.968434 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:09 crc kubenswrapper[4829]: E0217 16:14:09.968735 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:17.968704344 +0000 UTC m=+1170.385722322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.070185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"
Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.070385 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.070474 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.070451072 +0000 UTC m=+1170.487469050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.681094 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.681166 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"
Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681313 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681340 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681386 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.68136714 +0000 UTC m=+1171.098385118 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681425 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.681403361 +0000 UTC m=+1171.098421359 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.047750 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.056154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.149630 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.149879 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.149926 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:34.14991041 +0000 UTC m=+1186.566928398 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.259033 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.259249 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4fmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-shssw_openstack-operators(a711806b-ee8c-4fb8-b5da-da5e90ef06c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.260622 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podUID="a711806b-ee8c-4fb8-b5da-da5e90ef06c6" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.338516 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.763209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.763315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.763475 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.763652 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:34.763626605 +0000 UTC m=+1187.180644583 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.782482 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.096592 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.096989 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzrfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-t57qn_openstack-operators(60ea5425-d352-4d97-bedf-f01d07c89949): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.098334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podUID="60ea5425-d352-4d97-bedf-f01d07c89949" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.158651 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podUID="a711806b-ee8c-4fb8-b5da-da5e90ef06c6" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.158688 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podUID="60ea5425-d352-4d97-bedf-f01d07c89949" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.032239 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.033366 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6v8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-thspt_openstack-operators(4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.034681 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podUID="4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.217243 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podUID="4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.797713 4829 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.797966 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9kbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-m4df4_openstack-operators(3aab9223-4e3f-4657-afc2-91d0e0948542): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.799158 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podUID="3aab9223-4e3f-4657-afc2-91d0e0948542" Feb 17 16:14:23 crc kubenswrapper[4829]: E0217 16:14:23.191381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podUID="3aab9223-4e3f-4657-afc2-91d0e0948542" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.394317 4829 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.394870 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ft2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-fw4gg_openstack-operators(8642cada-3458-43cc-90aa-cf66a1cd6426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.396148 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podUID="8642cada-3458-43cc-90aa-cf66a1cd6426" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.953397 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.953587 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jtqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-2xmzw_openstack-operators(5239a5a9-e318-4db3-8394-0427d57d4ae5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.955657 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podUID="5239a5a9-e318-4db3-8394-0427d57d4ae5" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.222341 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podUID="5239a5a9-e318-4db3-8394-0427d57d4ae5" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.223187 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podUID="8642cada-3458-43cc-90aa-cf66a1cd6426" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.614492 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.614777 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jplzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-czbvb_openstack-operators(f083cb81-0369-46de-9562-406736ae7e2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.616193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podUID="f083cb81-0369-46de-9562-406736ae7e2f" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.236862 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podUID="f083cb81-0369-46de-9562-406736ae7e2f" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.270778 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.270978 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dv5r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-274tg_openstack-operators(958dea67-d633-4f5c-a18e-2aca1a55020c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.272960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podUID="958dea67-d633-4f5c-a18e-2aca1a55020c" Feb 17 16:14:27 crc kubenswrapper[4829]: E0217 16:14:27.238600 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podUID="958dea67-d633-4f5c-a18e-2aca1a55020c" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.076843 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.077363 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g85r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-gcxk7_openstack-operators(5b6c89f9-2c4f-4bab-8d8b-cd746acb3426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.078977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podUID="5b6c89f9-2c4f-4bab-8d8b-cd746acb3426" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.266041 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podUID="5b6c89f9-2c4f-4bab-8d8b-cd746acb3426" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.208188 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.208606 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d74g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-zbs8b_openstack-operators(23c03a71-fe86-47ad-ae4b-dd49bc07f2b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.210105 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podUID="23c03a71-fe86-47ad-ae4b-dd49bc07f2b0" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.280314 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podUID="23c03a71-fe86-47ad-ae4b-dd49bc07f2b0" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.968619 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.969096 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6965,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-hmtfv_openstack-operators(84a22a6b-1fb5-4959-9342-0bcc4b033b68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.970356 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podUID="84a22a6b-1fb5-4959-9342-0bcc4b033b68" Feb 17 16:14:32 crc kubenswrapper[4829]: E0217 16:14:32.289127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podUID="84a22a6b-1fb5-4959-9342-0bcc4b033b68" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.865822 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.866314 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rsx42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-mnrxb_openstack-operators(72028d3b-7fd0-4b17-b0c2-c92bc7134637): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.867545 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podUID="72028d3b-7fd0-4b17-b0c2-c92bc7134637" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.166365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.174887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.182878 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.310200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podUID="72028d3b-7fd0-4b17-b0c2-c92bc7134637" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.430430 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.430815 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-chvcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-9md4j_openstack-operators(dd52262f-900a-4801-8c4c-f79787b6b715): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.432092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podUID="dd52262f-900a-4801-8c4c-f79787b6b715" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.776101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.793961 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.987920 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.316802 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podUID="dd52262f-900a-4801-8c4c-f79787b6b715" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.803009 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.803721 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-dlskg_openstack-operators(6084260e-35c2-43b5-9606-98e1e0463e98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.805013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podUID="6084260e-35c2-43b5-9606-98e1e0463e98" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.326404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podUID="6084260e-35c2-43b5-9606-98e1e0463e98" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.505901 4829 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.506083 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-ndxcg_openstack-operators(2237138f-4450-415b-9646-c2ab9f88194a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.507310 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podUID="2237138f-4450-415b-9646-c2ab9f88194a" Feb 17 16:14:37 crc kubenswrapper[4829]: E0217 16:14:37.333339 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podUID="2237138f-4450-415b-9646-c2ab9f88194a" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.829806 4829 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.830246 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldsdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-nksk9_openstack-operators(62cfcaa0-5c8a-4a67-95b7-83aa695a8640): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.831435 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podUID="62cfcaa0-5c8a-4a67-95b7-83aa695a8640" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326133 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326240 4829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326446 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qfv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66fcc5ff49-8lb5d_openstack-operators(584ed73b-c202-4d41-b884-cd9c279b3c0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.328121 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" podUID="584ed73b-c202-4d41-b884-cd9c279b3c0d" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.355150 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podUID="62cfcaa0-5c8a-4a67-95b7-83aa695a8640" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.355430 4829 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" podUID="584ed73b-c202-4d41-b884-cd9c279b3c0d" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.891711 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.892040 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frqwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fht2z_openstack-operators(eaf75815-7964-4bc0-aeae-d3306764d7f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.894535 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podUID="eaf75815-7964-4bc0-aeae-d3306764d7f4" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.362967 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" event={"ID":"bb32d7a2-68ff-4511-a04f-fa09657791db","Type":"ContainerStarted","Data":"44e302407bf42f169a81f99c7f85f66a40c74db306e94e1e5459b6862f389921"} Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.363376 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:40 crc kubenswrapper[4829]: E0217 16:14:40.367552 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podUID="eaf75815-7964-4bc0-aeae-d3306764d7f4" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.444758 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" podStartSLOduration=3.251040083 podStartE2EDuration="39.444742229s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.006936933 +0000 UTC m=+1155.423954911" lastFinishedPulling="2026-02-17 16:14:39.200639069 +0000 UTC m=+1191.617657057" observedRunningTime="2026-02-17 16:14:40.398161047 +0000 UTC m=+1192.815179025" watchObservedRunningTime="2026-02-17 16:14:40.444742229 +0000 UTC m=+1192.861760207" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.477042 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:40 crc kubenswrapper[4829]: W0217 16:14:40.477394 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ec01cb_62ae_4855_b830_69f896bfb5a4.slice/crio-c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3 WatchSource:0}: Error finding container c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3: Status 404 returned error can't find the container with id c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3 Feb 17 16:14:40 crc 
kubenswrapper[4829]: I0217 16:14:40.629331 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.733120 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.385797 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" event={"ID":"3aab9223-4e3f-4657-afc2-91d0e0948542","Type":"ContainerStarted","Data":"e6a05a16598fcc712e79333bd8ec370bd28c9c6434cc4bd780516ded76b24202"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.386313 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.398931 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" event={"ID":"5239a5a9-e318-4db3-8394-0427d57d4ae5","Type":"ContainerStarted","Data":"5287f7ab06362448cad1ac5b6179ebfff1bed7065b50eec0570cc90de28093ed"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.399139 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.401992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" event={"ID":"f3add145-231f-4d7b-b9dd-115026b2a05e","Type":"ContainerStarted","Data":"b5e8f8d786bd77c40771ed73d08dda00030fefe31e45537d562efb4a51314225"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.402096 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.404669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podStartSLOduration=5.143086624 podStartE2EDuration="40.404658989s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813290652 +0000 UTC m=+1157.230308630" lastFinishedPulling="2026-02-17 16:14:40.074863017 +0000 UTC m=+1192.491880995" observedRunningTime="2026-02-17 16:14:41.404378552 +0000 UTC m=+1193.821396530" watchObservedRunningTime="2026-02-17 16:14:41.404658989 +0000 UTC m=+1193.821676967" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.409289 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" event={"ID":"60ea5425-d352-4d97-bedf-f01d07c89949","Type":"ContainerStarted","Data":"7a1eb64704035c19912673695c845fd607ba6a92e81fbd5aaae355adb31fcbdb"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.409487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.412830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" event={"ID":"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3","Type":"ContainerStarted","Data":"ce342504e318f487bd4bb96fb5e26484b68657d130564c90095d14710ec175b1"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.413022 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.414935 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" event={"ID":"a711806b-ee8c-4fb8-b5da-da5e90ef06c6","Type":"ContainerStarted","Data":"9673f23a882744c4fb3ae306fe1a79929982bc582d496145935cc0d12a9c6ca6"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.415068 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.416498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" event={"ID":"f083cb81-0369-46de-9562-406736ae7e2f","Type":"ContainerStarted","Data":"94125c94c9fa67af553ae8d19e67730d90936476077f3560dc3ba8a25fe9993d"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.417215 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420836 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" event={"ID":"aa745829-0443-47a5-8c10-701bd4645505","Type":"ContainerStarted","Data":"8f9293ea2a4503e3a6a9ce101256db803957c7d61382257a1375ce64d2e3c2e7"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420869 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" event={"ID":"aa745829-0443-47a5-8c10-701bd4645505","Type":"ContainerStarted","Data":"6f50a80e19745b9a332663211ea78b8ba7ff6dad4a9d4dee8831d248156b21d7"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420903 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.423053 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" podStartSLOduration=3.875894614 podStartE2EDuration="40.423038074s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.361638533 +0000 UTC m=+1155.778656511" lastFinishedPulling="2026-02-17 16:14:39.908781993 +0000 UTC m=+1192.325799971" observedRunningTime="2026-02-17 16:14:41.420907136 +0000 UTC m=+1193.837925114" watchObservedRunningTime="2026-02-17 16:14:41.423038074 +0000 UTC m=+1193.840056052" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.427204 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" event={"ID":"8642cada-3458-43cc-90aa-cf66a1cd6426","Type":"ContainerStarted","Data":"e15d4312827b1945f9e0486773b8c0b032d6b8d88b139de4027a0c33ae8dc831"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.427526 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.429153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" event={"ID":"0e275e91-4b6e-419e-b076-a6e221f8a8ac","Type":"ContainerStarted","Data":"509b47f2ee0a1479489a30b875afee6ce1de270c0c6c3179e0d0a884b5eb0790"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.432129 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" event={"ID":"a1ec01cb-62ae-4855-b830-69f896bfb5a4","Type":"ContainerStarted","Data":"c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.434955 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podStartSLOduration=4.230409141 podStartE2EDuration="39.434942113s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.812768608 +0000 UTC m=+1157.229786586" lastFinishedPulling="2026-02-17 16:14:40.01730158 +0000 UTC m=+1192.434319558" observedRunningTime="2026-02-17 16:14:41.432344003 +0000 UTC m=+1193.849361981" watchObservedRunningTime="2026-02-17 16:14:41.434942113 +0000 UTC m=+1193.851960091" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.456764 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podStartSLOduration=4.399146141 podStartE2EDuration="40.45674574s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.01810966 +0000 UTC m=+1156.435127638" lastFinishedPulling="2026-02-17 16:14:40.075709259 +0000 UTC m=+1192.492727237" observedRunningTime="2026-02-17 16:14:41.456213355 +0000 UTC m=+1193.873231333" watchObservedRunningTime="2026-02-17 16:14:41.45674574 +0000 UTC m=+1193.873763718" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.493090 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podStartSLOduration=4.2196628369999996 podStartE2EDuration="39.493069776s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.730969553 +0000 UTC m=+1157.147987531" lastFinishedPulling="2026-02-17 16:14:40.004376492 +0000 UTC m=+1192.421394470" observedRunningTime="2026-02-17 16:14:41.485926133 +0000 UTC m=+1193.902944111" watchObservedRunningTime="2026-02-17 16:14:41.493069776 +0000 UTC m=+1193.910087754" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.522268 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" podStartSLOduration=39.52224952 podStartE2EDuration="39.52224952s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:41.515849068 +0000 UTC m=+1193.932867046" watchObservedRunningTime="2026-02-17 16:14:41.52224952 +0000 UTC m=+1193.939267498" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.538669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podStartSLOduration=4.236083767 podStartE2EDuration="40.538653251s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.641661261 +0000 UTC m=+1156.058679239" lastFinishedPulling="2026-02-17 16:14:39.944230745 +0000 UTC m=+1192.361248723" observedRunningTime="2026-02-17 16:14:41.536288197 +0000 UTC m=+1193.953306165" watchObservedRunningTime="2026-02-17 16:14:41.538653251 +0000 UTC m=+1193.955671229" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.561605 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podStartSLOduration=4.534594267 podStartE2EDuration="40.561590847s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.005991001 +0000 UTC m=+1156.423008979" lastFinishedPulling="2026-02-17 16:14:40.032987581 +0000 UTC m=+1192.450005559" observedRunningTime="2026-02-17 16:14:41.558194416 +0000 UTC m=+1193.975212394" watchObservedRunningTime="2026-02-17 16:14:41.561590847 +0000 UTC m=+1193.978608825" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.584536 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podStartSLOduration=3.849010892 podStartE2EDuration="40.584516544s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.26999781 +0000 UTC m=+1155.687015788" lastFinishedPulling="2026-02-17 16:14:40.005503462 +0000 UTC m=+1192.422521440" observedRunningTime="2026-02-17 16:14:41.579146309 +0000 UTC m=+1193.996164287" watchObservedRunningTime="2026-02-17 16:14:41.584516544 +0000 UTC m=+1194.001534522" Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.451168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" event={"ID":"958dea67-d633-4f5c-a18e-2aca1a55020c","Type":"ContainerStarted","Data":"88293cbf2f1671c36e7f8c0cbf620ce8258bb20c5f7a0c24a5039de005eaccd4"} Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.452592 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.473048 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podStartSLOduration=4.321775336 podStartE2EDuration="41.473030304s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.716092798 +0000 UTC m=+1157.133110776" lastFinishedPulling="2026-02-17 16:14:41.867347756 +0000 UTC m=+1194.284365744" observedRunningTime="2026-02-17 16:14:43.465186913 +0000 UTC m=+1195.882204891" watchObservedRunningTime="2026-02-17 16:14:43.473030304 +0000 UTC m=+1195.890048282" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.489949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" 
event={"ID":"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426","Type":"ContainerStarted","Data":"29c1a92d22c4ca1ecaea93dedb5f38ae2baad52a9b245632c80ec34b2e8e599c"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.490615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.491268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" event={"ID":"84a22a6b-1fb5-4959-9342-0bcc4b033b68","Type":"ContainerStarted","Data":"32da7910f9c9c18a966f47442a8fb830ae393db663018d85f7b4d8b379ff45a4"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.491439 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.492668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" event={"ID":"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0","Type":"ContainerStarted","Data":"2165e187fbe350af612738e6419631589c08a52f45771b261a5498605f214f2a"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.492819 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.494417 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" event={"ID":"0e275e91-4b6e-419e-b076-a6e221f8a8ac","Type":"ContainerStarted","Data":"2fd522f7361e535a9b193d19ccbdd8189ba328384534d27c5d01aee1a2c103f7"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.494552 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.495869 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" event={"ID":"a1ec01cb-62ae-4855-b830-69f896bfb5a4","Type":"ContainerStarted","Data":"564a220e3437a0e2a4a235820f60a04f246742eb4dfee12a29862a3eb89e72a3"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.496036 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.514382 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podStartSLOduration=4.999781177 podStartE2EDuration="46.514362169s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.855195472 +0000 UTC m=+1157.272213450" lastFinishedPulling="2026-02-17 16:14:46.369776464 +0000 UTC m=+1198.786794442" observedRunningTime="2026-02-17 16:14:47.505995964 +0000 UTC m=+1199.923013952" watchObservedRunningTime="2026-02-17 16:14:47.514362169 +0000 UTC m=+1199.931380157" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.540235 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" podStartSLOduration=40.914486232 podStartE2EDuration="46.540215683s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:40.744102515 +0000 UTC m=+1193.161120493" lastFinishedPulling="2026-02-17 16:14:46.369831966 +0000 UTC m=+1198.786849944" observedRunningTime="2026-02-17 16:14:47.528389446 +0000 UTC m=+1199.945407424" watchObservedRunningTime="2026-02-17 16:14:47.540215683 +0000 UTC m=+1199.957233661" Feb 17 16:14:47 crc 
kubenswrapper[4829]: I0217 16:14:47.557272 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podStartSLOduration=3.8221028759999998 podStartE2EDuration="46.557251372s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.634024122 +0000 UTC m=+1156.051042100" lastFinishedPulling="2026-02-17 16:14:46.369172618 +0000 UTC m=+1198.786190596" observedRunningTime="2026-02-17 16:14:47.548199538 +0000 UTC m=+1199.965217536" watchObservedRunningTime="2026-02-17 16:14:47.557251372 +0000 UTC m=+1199.974269360" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.588778 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" podStartSLOduration=39.709311687 podStartE2EDuration="45.588757828s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:40.490443466 +0000 UTC m=+1192.907461444" lastFinishedPulling="2026-02-17 16:14:46.369889617 +0000 UTC m=+1198.786907585" observedRunningTime="2026-02-17 16:14:47.585160522 +0000 UTC m=+1200.002178510" watchObservedRunningTime="2026-02-17 16:14:47.588757828 +0000 UTC m=+1200.005775816" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.604693 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podStartSLOduration=4.138386159 podStartE2EDuration="45.604676507s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.90549548 +0000 UTC m=+1157.322513458" lastFinishedPulling="2026-02-17 16:14:46.371785828 +0000 UTC m=+1198.788803806" observedRunningTime="2026-02-17 16:14:47.601781728 +0000 UTC m=+1200.018799716" watchObservedRunningTime="2026-02-17 16:14:47.604676507 +0000 UTC m=+1200.021694485" Feb 17 16:14:49 crc 
kubenswrapper[4829]: I0217 16:14:49.515884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" event={"ID":"72028d3b-7fd0-4b17-b0c2-c92bc7134637","Type":"ContainerStarted","Data":"d82f565b537339cc08b4424bf144ed18cdb420ad45939eeafea50c632a2efd5c"} Feb 17 16:14:49 crc kubenswrapper[4829]: I0217 16:14:49.516593 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:49 crc kubenswrapper[4829]: I0217 16:14:49.546006 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podStartSLOduration=3.281491231 podStartE2EDuration="47.545981615s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.672233625 +0000 UTC m=+1157.089251603" lastFinishedPulling="2026-02-17 16:14:48.936723999 +0000 UTC m=+1201.353741987" observedRunningTime="2026-02-17 16:14:49.541435083 +0000 UTC m=+1201.958453061" watchObservedRunningTime="2026-02-17 16:14:49.545981615 +0000 UTC m=+1201.962999613" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.527691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" event={"ID":"dd52262f-900a-4801-8c4c-f79787b6b715","Type":"ContainerStarted","Data":"2304bec75914d03abfe30afa3a98c2eeb838b02f618e63413cdf3a424ff7d17c"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.528285 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.530442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" 
event={"ID":"6084260e-35c2-43b5-9606-98e1e0463e98","Type":"ContainerStarted","Data":"257fd65b29ecb0a135895cb8e372e3088279802b47674eedcc1f9aed9f440f0c"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.531311 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.533931 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" event={"ID":"2237138f-4450-415b-9646-c2ab9f88194a","Type":"ContainerStarted","Data":"df3f044bea487993acecd8a1aaf0b36ba2e6e44739e978590a5f7d79aeff183d"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.583294 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podStartSLOduration=3.4873499 podStartE2EDuration="48.583275096s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813104487 +0000 UTC m=+1157.230122465" lastFinishedPulling="2026-02-17 16:14:49.909029663 +0000 UTC m=+1202.326047661" observedRunningTime="2026-02-17 16:14:50.58191906 +0000 UTC m=+1202.998937078" watchObservedRunningTime="2026-02-17 16:14:50.583275096 +0000 UTC m=+1203.000293094" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.586567 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podStartSLOduration=3.3215519159999998 podStartE2EDuration="49.586554294s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.646647226 +0000 UTC m=+1156.063665204" lastFinishedPulling="2026-02-17 16:14:49.911649604 +0000 UTC m=+1202.328667582" observedRunningTime="2026-02-17 16:14:50.557289317 +0000 UTC m=+1202.974307285" watchObservedRunningTime="2026-02-17 
16:14:50.586554294 +0000 UTC m=+1203.003572282" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.608342 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podStartSLOduration=2.593888394 podStartE2EDuration="49.608315329s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:02.8957878 +0000 UTC m=+1155.312805778" lastFinishedPulling="2026-02-17 16:14:49.910214715 +0000 UTC m=+1202.327232713" observedRunningTime="2026-02-17 16:14:50.600298584 +0000 UTC m=+1203.017316562" watchObservedRunningTime="2026-02-17 16:14:50.608315329 +0000 UTC m=+1203.025333327" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.170695 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.229554 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.243225 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.354969 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.430669 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.575605 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.603146 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.615190 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.626252 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.651246 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.733269 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.807226 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:53 crc kubenswrapper[4829]: I0217 16:14:53.011184 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:53 crc kubenswrapper[4829]: I0217 16:14:53.085254 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:54 crc kubenswrapper[4829]: I0217 16:14:54.196921 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:54 crc kubenswrapper[4829]: I0217 16:14:54.995315 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.582813 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" event={"ID":"584ed73b-c202-4d41-b884-cd9c279b3c0d","Type":"ContainerStarted","Data":"a3dfbadaf79b256b9a88b904f4325eb86d9ecc1fa6bf849bd44ee9f840085a1d"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.583298 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.586806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" event={"ID":"62cfcaa0-5c8a-4a67-95b7-83aa695a8640","Type":"ContainerStarted","Data":"fb12d147e287a0d23b1180603855d9346f90298adeb33461cedeaa1c78e5ded9"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.587195 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.595113 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" event={"ID":"eaf75815-7964-4bc0-aeae-d3306764d7f4","Type":"ContainerStarted","Data":"5342185a2f3423e6911215458c7528f4dd254d61e795e2d8462863f544919346"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.598754 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" 
podStartSLOduration=3.526874523 podStartE2EDuration="53.598731894s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813072476 +0000 UTC m=+1157.230090454" lastFinishedPulling="2026-02-17 16:14:54.884929837 +0000 UTC m=+1207.301947825" observedRunningTime="2026-02-17 16:14:55.596884034 +0000 UTC m=+1208.013902002" watchObservedRunningTime="2026-02-17 16:14:55.598731894 +0000 UTC m=+1208.015749882" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.626243 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podStartSLOduration=3.728027552 podStartE2EDuration="54.626221213s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.98869254 +0000 UTC m=+1156.405710518" lastFinishedPulling="2026-02-17 16:14:54.886886161 +0000 UTC m=+1207.303904179" observedRunningTime="2026-02-17 16:14:55.618260899 +0000 UTC m=+1208.035278887" watchObservedRunningTime="2026-02-17 16:14:55.626221213 +0000 UTC m=+1208.043239191" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.638085 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podStartSLOduration=4.336953325 podStartE2EDuration="53.638063531s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.812981313 +0000 UTC m=+1157.229999291" lastFinishedPulling="2026-02-17 16:14:54.114091469 +0000 UTC m=+1206.531109497" observedRunningTime="2026-02-17 16:14:55.630861557 +0000 UTC m=+1208.047879535" watchObservedRunningTime="2026-02-17 16:14:55.638063531 +0000 UTC m=+1208.055081509" Feb 17 16:14:58 crc kubenswrapper[4829]: I0217 16:14:58.348531 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:15:00 crc 
kubenswrapper[4829]: I0217 16:15:00.153062 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.154964 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.160082 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.160111 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.162018 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302295 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302361 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302409 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404616 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.406226 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.413320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.426402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.484200 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.961346 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: W0217 16:15:00.970603 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88fd8a6_9c2a_4529_81eb_5495aa3237c8.slice/crio-323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459 WatchSource:0}: Error finding container 323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459: Status 404 returned error can't find the container with id 323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459 Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648408 4829 generic.go:334] "Generic (PLEG): container finished" podID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerID="595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c" exitCode=0 Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerDied","Data":"595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c"} Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerStarted","Data":"323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459"} Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.117912 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 
16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.312079 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.559288 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.655318 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.711147 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.823926 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.084168 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169441 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169657 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169735 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.170591 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.179723 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.179811 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx" (OuterVolumeSpecName: "kube-api-access-9cfdx") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "kube-api-access-9cfdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271355 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271387 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271396 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.675683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerDied","Data":"323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459"} Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.676002 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.675719 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.601315 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:20 crc kubenswrapper[4829]: E0217 16:15:20.602151 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.602167 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.602395 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.603449 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613240 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-prqgw" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613623 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613702 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xml\" (UniqueName: 
\"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613779 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613796 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.623997 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.671596 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.672865 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.675321 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.689846 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.715181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.715275 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xml\" (UniqueName: 
\"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.716431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.756372 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.816948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.816996 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.817033 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.918773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.919115 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.919808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.920405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.920516 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: 
\"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.934767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.937083 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.992896 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.455996 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.542896 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.758543 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" event={"ID":"8d5f50bb-1dbc-4661-91f3-66c29ea7430e","Type":"ContainerStarted","Data":"e7c4359a6a86de75a2f21197c9258209e81a5ec6d1e0f7b03fc162a1d9d53e77"} Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.761195 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" event={"ID":"ffccb67d-5096-4a51-adf3-4bf3739373ea","Type":"ContainerStarted","Data":"cacd8eed3fb0b0769b53687fb7ee29d23d0b51c36a9b2e50197b211f45b0f9c2"} Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.388186 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:23 crc 
kubenswrapper[4829]: I0217 16:15:23.405069 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.406460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.419767 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.582882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.582968 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.583011 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: 
\"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684634 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.685430 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.686025 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.711552 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 
17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.731085 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.732707 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.743300 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.745188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.783176 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.896733 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.897687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.897767 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: 
\"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:23.999564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:23.999907 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000403 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc 
kubenswrapper[4829]: I0217 16:15:24.026296 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.175648 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.351812 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:24 crc kubenswrapper[4829]: W0217 16:15:24.396433 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c13771b_c220_4ce6_9d1c_3c76af499220.slice/crio-ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a WatchSource:0}: Error finding container ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a: Status 404 returned error can't find the container with id ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.558704 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.560612 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.562471 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6sqhz" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.562778 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563358 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563696 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563922 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.564648 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.564763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.583329 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.596314 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.597882 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.613939 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.616831 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.638798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.662641 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.693911 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:24 crc kubenswrapper[4829]: W0217 16:15:24.701351 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66112eb6_8e4a_4469_8cfd_825bf6b7563d.slice/crio-8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73 WatchSource:0}: Error finding container 8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73: Status 404 returned error can't find the container with id 8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73 Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716614 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716647 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716669 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716718 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716756 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.718626 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.718720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719216 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719425 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719461 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719528 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719590 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719712 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719885 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: 
\"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719963 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720042 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720068 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720654 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720667 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.821971 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822050 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822113 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822126 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822142 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822177 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822194 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822223 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822257 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822294 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822310 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822347 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822388 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822422 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822464 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 
16:15:24.822480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822492 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.823641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.823918 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.824463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825043 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825525 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825850 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.826219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.834764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.835357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836170 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.837404 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.839247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840030 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 
16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840999 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.842004 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.842057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843818 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843847 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cec88d4327ff12753cbf1d7636d4616ad5b51e6f71f7c68ee07d08bc8a1cc1e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843869 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843895 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2fb41440360b87637c863c905d7642fdbb5fac4b43922d0db49761300e3e982/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844345 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844423 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844441 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b279f517412c9d421e4d384ad7a1032e9021db2370e77c854a0ec0125cf75d39/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844799 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.849300 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.849625 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " 
pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.852486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.855436 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.858069 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.861701 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.875250 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerStarted","Data":"ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a"} Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.877314 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.880076 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.883663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" event={"ID":"66112eb6-8e4a-4469-8cfd-825bf6b7563d","Type":"ContainerStarted","Data":"8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73"} Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.883748 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891197 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891443 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891393 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9x5xf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891581 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891626 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.894321 4829 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.915038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.917936 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.970295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.014745 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.028933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.028992 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029035 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029099 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130732 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130821 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130846 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130884 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130910 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130934 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130968 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130985 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131547 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132047 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132424 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132606 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.133719 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc 
kubenswrapper[4829]: I0217 16:15:25.136346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.136649 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.139336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.139834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.142374 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.142411 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c712c179c4211caeb2d08f251b409f456d9a156c71e8c917f92effa050520833/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.159046 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.191132 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.201931 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.225814 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.242292 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.318696 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.706832 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:25 crc kubenswrapper[4829]: W0217 16:15:25.884927 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee690a85_cf83_4e55_a69d_ca6bd136bf07.slice/crio-a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc WatchSource:0}: Error finding container a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc: Status 404 returned error can't find the container with id a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc Feb 17 16:15:25 crc kubenswrapper[4829]: W0217 16:15:25.886970 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod328bcfe0_93b6_44bb_83ca_2b3a105f1548.slice/crio-bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928 WatchSource:0}: Error finding container bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928: Status 404 returned error can't find the container with id bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928 Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.908351 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.026817 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.028455 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.030447 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ztmt6" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.031817 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.033331 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.033518 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.037614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.039188 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 16:15:26 crc kubenswrapper[4829]: W0217 16:15:26.085487 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257c3943_bfcb_409b_a915_bacfd95d9c93.slice/crio-c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e WatchSource:0}: Error finding container c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e: Status 404 returned error can't find the container with id c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.086709 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189683 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189824 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189860 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.228668 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: W0217 16:15:26.235425 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18c52f3_efc1_4a9b_a7b0_b19bc419dd4d.slice/crio-aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb WatchSource:0}: Error finding container aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb: Status 404 returned error can't find the container with id aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") pod \"openstack-galera-0\" (UID: 
\"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292600 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 
16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293426 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293621 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293711 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.294485 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.299078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304775 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304804 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb65fd8172e557afa0bcf95dbc3a5ab3334f442ae8b5643b4c42d5eeefe12cd5/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.318127 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") 
pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.351673 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.643287 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.916982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.918661 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.934295 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.936445 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e"} Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.223156 4829 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: W0217 16:15:27.311827 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903a9538_3e9d_4567_a9c2_0eeaaf450b85.slice/crio-99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883 WatchSource:0}: Error finding container 99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883: Status 404 returned error can't find the container with id 99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883 Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.368406 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.371652 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.379780 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.379982 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.380173 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.380945 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9mdf7" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.388563 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.516501 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: 
I0217 16:15:27.517844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.519959 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.520223 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-z9ct4" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.520373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535388 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535498 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535531 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535550 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535596 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535617 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535657 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.541435 
4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637392 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637433 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637480 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637524 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " 
pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637584 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637608 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637636 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637653 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637682 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 
16:15:27.637729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637764 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637791 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.639204 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.639543 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.640078 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.643415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.644171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.648672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.649108 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.649141 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7206d36e835ecb5f541b54a5de40bbe7e6392727d9a7c454e3983214fdd1c801/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.656631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.706205 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.712336 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739031 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739835 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.748180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.756795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.757162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.849159 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.985407 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883"} Feb 17 16:15:28 crc kubenswrapper[4829]: I0217 16:15:28.574281 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:28 crc kubenswrapper[4829]: I0217 16:15:28.726806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.079700 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"5a2e8b048098164d9ed25ec98a771c68bee3c41abe41b76c8e5e8b0a15f1ff46"} Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.976765 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.978656 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.981408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zktxq" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.016419 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.100782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.203181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.236223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.322980 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.657132 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.658686 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.660356 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-fp6pv" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.660871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.679066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.720861 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.720910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 
16:15:30.822879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.822923 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: E0217 16:15:30.823105 4829 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 17 16:15:30 crc kubenswrapper[4829]: E0217 16:15:30.823159 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert podName:54f57142-2ddb-4c2f-a68e-ab77ff965e8c nodeName:}" failed. No retries permitted until 2026-02-17 16:15:31.323140957 +0000 UTC m=+1243.740158935 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-vtctx" (UID: "54f57142-2ddb-4c2f-a68e-ab77ff965e8c") : secret "observability-ui-dashboards" not found Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.857626 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.012938 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.019172 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.027403 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159334 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159364 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159394 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159434 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159818 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.175211 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.177801 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.195405 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.195677 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.198624 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.198984 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.199260 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vxmz6" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.202037 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.214307 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.247779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.255632 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.270668 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod 
\"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.271010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.271127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272195 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272611 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272631 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272815 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273123 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273221 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273238 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: 
I0217 16:15:31.273643 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273773 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.277466 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.277784 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.279833 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.284842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.292376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.303990 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.381488 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382528 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382556 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382640 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382663 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382681 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382812 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 
crc kubenswrapper[4829]: I0217 16:15:31.384146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.387708 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.389100 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.389586 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.391918 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.391956 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe3c2171ea8e537d787d3308fa5bc6f869ae05d2809df2c7eb9ceb73db78889d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392032 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392449 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.395494 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.397317 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.403272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.449674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.527687 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.580728 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.122967 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-75gff"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.124300 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.127025 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6r5bm" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.127217 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.133970 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.141296 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.182716 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.185330 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.219196 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222033 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b9l\" (UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222341 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222433 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223183 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223234 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223342 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223432 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223463 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223525 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332302 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod 
\"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332476 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332536 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332649 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332693 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: 
I0217 16:15:33.332744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333006 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333051 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87b9l\" 
(UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333309 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.334453 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.334815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " 
pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335437 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.336202 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.336730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.338946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.350253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.354223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.361601 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87b9l\" (UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.456136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.517847 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.580676 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.590037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.594668 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hdcf5" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612371 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612468 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612613 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.621876 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639742 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639845 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741926 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741973 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742060 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742082 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742282 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.743234 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc 
kubenswrapper[4829]: I0217 16:15:33.743500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.744241 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.744261 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a42f6f73351298ff4826167c7f4d711c587190a4cbbca9131e27b0085e9331e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.746974 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.748171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.754420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.761039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.790717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.936271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:36 crc kubenswrapper[4829]: I0217 16:15:36.254401 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4e3198cb-0642-46be-a9e3-33db29446377","Type":"ContainerStarted","Data":"cb71d8e5ea1106b4ed46a413f2381d4a45026e16e4608e4fad10ecfcdbb05242"} Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.163873 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.166039 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168436 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168641 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p52xq" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168732 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.169688 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.186464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225002 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225208 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.327795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.328240 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.328958 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329329 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329452 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329610 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.330695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.331158 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.333145 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.333171 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb22d4f52b89ee248d8eb9b677cd90d33956744283eac5d5ab5898997f58e911/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.335727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.335811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.340224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.343294 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.363810 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.493727 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:45 crc kubenswrapper[4829]: I0217 16:15:45.960829 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.140455 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.141628 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(3949cc3c-e03d-42b7-b07f-dbdce94d7283): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.142960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.401907 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" Feb 17 16:15:50 crc kubenswrapper[4829]: W0217 16:15:50.313096 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod741f1fbb_0699_4bb0_b46e_6eaa47595170.slice/crio-c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3 WatchSource:0}: Error finding container c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3: Status 404 returned error can't find the container with id c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3 Feb 17 16:15:50 crc kubenswrapper[4829]: I0217 16:15:50.410473 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3"} Feb 17 16:15:52 crc kubenswrapper[4829]: I0217 16:15:52.424304 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:52 crc kubenswrapper[4829]: I0217 16:15:52.424682 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.642224 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.642917 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkw5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ftmfx_openstack(66112eb6-8e4a-4469-8cfd-825bf6b7563d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.644194 4829 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.030804 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.031013 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9wpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-drgmb_openstack(5c13771b-c220-4ce6-9d1c-3c76af499220): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.032291 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.170899 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.171158 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zclf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4zwb8_openstack(8d5f50bb-1dbc-4661-91f3-66c29ea7430e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.172558 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" podUID="8d5f50bb-1dbc-4661-91f3-66c29ea7430e" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.275386 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.275866 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87xml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wffgx_openstack(ffccb67d-5096-4a51-adf3-4bf3739373ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.278757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" podUID="ffccb67d-5096-4a51-adf3-4bf3739373ea" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.492552 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.492897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.500140 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1"} Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.503213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4e3198cb-0642-46be-a9e3-33db29446377","Type":"ContainerStarted","Data":"045eb1b277710dc5c13050ac2f2f64bf44e697379d44f725d130160d951edb94"} Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.503372 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.555626 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.687420054 podStartE2EDuration="32.55560677s" podCreationTimestamp="2026-02-17 16:15:27 +0000 UTC" firstStartedPulling="2026-02-17 16:15:35.350710709 +0000 UTC 
m=+1247.767728687" lastFinishedPulling="2026-02-17 16:15:58.218897425 +0000 UTC m=+1270.635915403" observedRunningTime="2026-02-17 16:15:59.549025886 +0000 UTC m=+1271.966043864" watchObservedRunningTime="2026-02-17 16:15:59.55560677 +0000 UTC m=+1271.972624758"
Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.828048 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.111952 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8"
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.121865 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx"
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.203210 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.211400 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.218776 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.227310 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.266909 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285038 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"ffccb67d-5096-4a51-adf3-4bf3739373ea\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") "
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285191 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") "
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285276 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"ffccb67d-5096-4a51-adf3-4bf3739373ea\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") "
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285489 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") "
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") "
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: "8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config" (OuterVolumeSpecName: "config") pod "ffccb67d-5096-4a51-adf3-4bf3739373ea" (UID: "ffccb67d-5096-4a51-adf3-4bf3739373ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config" (OuterVolumeSpecName: "config") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: "8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286630 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286650 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286660 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.292787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf" (OuterVolumeSpecName: "kube-api-access-4zclf") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: "8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "kube-api-access-4zclf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.312398 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml" (OuterVolumeSpecName: "kube-api-access-87xml") pod "ffccb67d-5096-4a51-adf3-4bf3739373ea" (UID: "ffccb67d-5096-4a51-adf3-4bf3739373ea"). InnerVolumeSpecName "kube-api-access-87xml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.388533 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.388804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.417493 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.513208 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.518287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.521651 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" event={"ID":"8d5f50bb-1dbc-4661-91f3-66c29ea7430e","Type":"ContainerDied","Data":"e7c4359a6a86de75a2f21197c9258209e81a5ec6d1e0f7b03fc162a1d9d53e77"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.521746 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8"
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.525684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.529172 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d6749f5-rhzrt" event={"ID":"7c076d16-b8e7-4cec-a826-0bfde37276e5","Type":"ContainerStarted","Data":"8fc2b09df95dd5088340580e5716206baf417ca9d4012c72846848e4f2514e5e"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.529220 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d6749f5-rhzrt" event={"ID":"7c076d16-b8e7-4cec-a826-0bfde37276e5","Type":"ContainerStarted","Data":"59bfed50ce8346db033c2aba1138b958c53b8ea108cbd1a9924a20ecc090d6ae"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.531143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" event={"ID":"ffccb67d-5096-4a51-adf3-4bf3739373ea","Type":"ContainerDied","Data":"cacd8eed3fb0b0769b53687fb7ee29d23d0b51c36a9b2e50197b211f45b0f9c2"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.531156 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx"
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.532984 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8"}
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.631742 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86d6749f5-rhzrt" podStartSLOduration=30.631723492 podStartE2EDuration="30.631723492s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:00.624591084 +0000 UTC m=+1273.041609062" watchObservedRunningTime="2026-02-17 16:16:00.631723492 +0000 UTC m=+1273.048741470"
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.691638 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.726408 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.768467 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"]
Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.776007 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"]
Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.392934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86d6749f5-rhzrt"
Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.393300 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86d6749f5-rhzrt"
Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.408551 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86d6749f5-rhzrt"
Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.553099 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86d6749f5-rhzrt"
Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.630476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-864565556d-824bj"]
Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.901471 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f57142_2ddb_4c2f_a68e_ab77ff965e8c.slice/crio-b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499 WatchSource:0}: Error finding container b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499: Status 404 returned error can't find the container with id b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499
Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.904065 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177c70b9_7b56_48f4_abd1_4d7a9c86450a.slice/crio-7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462 WatchSource:0}: Error finding container 7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462: Status 404 returned error can't find the container with id 7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462
Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.906202 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2003bd16_d251_4004_9eca_9e47fb54e514.slice/crio-f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d WatchSource:0}: Error finding container f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d: Status 404 returned error can't find the container with id f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d
Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.907482 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eeefec2_2e41_4278_8c9d_889dbf5f51ea.slice/crio-d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e WatchSource:0}: Error finding container d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e: Status 404 returned error can't find the container with id d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e
Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.915002 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5adca8d_ac72_45d0_aa1c_3c453a78620e.slice/crio-4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a WatchSource:0}: Error finding container 4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a: Status 404 returned error can't find the container with id 4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.294481 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5f50bb-1dbc-4661-91f3-66c29ea7430e" path="/var/lib/kubelet/pods/8d5f50bb-1dbc-4661-91f3-66c29ea7430e/volumes"
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.295099 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffccb67d-5096-4a51-adf3-4bf3739373ea" path="/var/lib/kubelet/pods/ffccb67d-5096-4a51-adf3-4bf3739373ea/volumes"
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.593897 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" event={"ID":"54f57142-2ddb-4c2f-a68e-ab77ff965e8c","Type":"ContainerStarted","Data":"b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.608826 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"c7f811cd14f674b453660b6ad7f81e29e6d3b47e489fe39baf0386ce0d424985"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.644164 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.687123 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.695902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.700469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff" event={"ID":"e5adca8d-ac72-45d0-aa1c-3c453a78620e","Type":"ContainerStarted","Data":"4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a"}
Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.701793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerStarted","Data":"f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d"}
Feb 17 16:16:03 crc kubenswrapper[4829]: I0217 16:16:03.723862 4829 generic.go:334] "Generic (PLEG): container finished" podID="741f1fbb-0699-4bb0-b46e-6eaa47595170" containerID="a275304b94e13756beec5bc3ea22cea73943689fb08b990e770398e332fc4612" exitCode=0
Feb 17 16:16:03 crc kubenswrapper[4829]: I0217 16:16:03.725052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerDied","Data":"a275304b94e13756beec5bc3ea22cea73943689fb08b990e770398e332fc4612"}
Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"fae7e9ae2e690bc53d5f8669f14902debc99d2cb7767aeb20a9cb98be3ae6c5c"}
Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740770 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kwz7l"
Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"196402ba8f3339d4460e67b2683a659c1cfcc9f89c0daad7ca73a902d4481e49"}
Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740803 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kwz7l"
Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.773513 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kwz7l" podStartSLOduration=19.777784754 podStartE2EDuration="31.773486455s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:15:50.317041411 +0000 UTC m=+1262.734059419" lastFinishedPulling="2026-02-17 16:16:02.312743122 +0000 UTC m=+1274.729761120" observedRunningTime="2026-02-17 16:16:04.764879958 +0000 UTC m=+1277.181897926" watchObservedRunningTime="2026-02-17 16:16:04.773486455 +0000 UTC m=+1277.190504463"
Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.770702 4829 generic.go:334] "Generic (PLEG): container finished" podID="903a9538-3e9d-4567-a9c2-0eeaaf450b85" containerID="81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1" exitCode=0
Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.770806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerDied","Data":"81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1"}
Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.850662 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 17 16:16:08 crc kubenswrapper[4829]: I0217 16:16:08.798924 4829 generic.go:334] "Generic (PLEG): container finished" podID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" containerID="c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838" exitCode=0
Feb 17 16:16:08 crc kubenswrapper[4829]: I0217 16:16:08.799267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerDied","Data":"c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838"}
Feb 17 16:16:09 crc kubenswrapper[4829]: I0217 16:16:09.827034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"cb5edfaa181cc07904d86d2889543a22d081fa9b236fe1a4c668e4099c504a68"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.253591 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"]
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.349842 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"]
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.351376 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.374511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"]
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530658 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633669 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633721 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.634635 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.634647 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.668536 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.677040 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.851025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" event={"ID":"54f57142-2ddb-4c2f-a68e-ab77ff965e8c","Type":"ContainerStarted","Data":"5377f3d7675e771e2cb33f5f0a44ee7e01cb5f9b6da4c0b82963d668146cbd22"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.856643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"fc9d9c4907c2d4bdac819738d0ec4a90fece0da9858b14ae4075e37451c348a4"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.874498 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" podStartSLOduration=34.21217454 podStartE2EDuration="40.874479466s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.9045366 +0000 UTC m=+1274.321554578" lastFinishedPulling="2026-02-17 16:16:08.566841516 +0000 UTC m=+1280.983859504" observedRunningTime="2026-02-17 16:16:10.872552544 +0000 UTC m=+1283.289570522" watchObservedRunningTime="2026-02-17 16:16:10.874479466 +0000 UTC m=+1283.291497444"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.886035 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"72c8454327a4b5d62205b47d208bfac90bd174e589327b1876678366558bee4e"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.893684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff" event={"ID":"e5adca8d-ac72-45d0-aa1c-3c453a78620e","Type":"ContainerStarted","Data":"849c857f0f4760afddff607fc710b47bec4447f8edd5991d57bc85a528a0c656"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.894445 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-75gff"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.906779 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"3deafe6d5bc9d86f658feddcf39a9e958fd27db1707b6c9b428025af0360eb98"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.921263 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371991.93353 podStartE2EDuration="44.92124423s" podCreationTimestamp="2026-02-17 16:15:26 +0000 UTC" firstStartedPulling="2026-02-17 16:15:28.595597493 +0000 UTC m=+1241.012615471" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:10.917918383 +0000 UTC m=+1283.334936361" watchObservedRunningTime="2026-02-17 16:16:10.92124423 +0000 UTC m=+1283.338262198"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.926914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerStarted","Data":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"}
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.927863 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.952341 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.071951726 podStartE2EDuration="46.952323821s" podCreationTimestamp="2026-02-17 16:15:24 +0000 UTC" firstStartedPulling="2026-02-17 16:15:27.327307354 +0000 UTC m=+1239.744325332" lastFinishedPulling="2026-02-17 16:15:58.207679449 +0000 UTC m=+1270.624697427" observedRunningTime="2026-02-17 16:16:10.948965683 +0000 UTC m=+1283.365983661" watchObservedRunningTime="2026-02-17 16:16:10.952323821 +0000 UTC m=+1283.369341799"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.974645 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-75gff" podStartSLOduration=31.599245272 podStartE2EDuration="37.97462873s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.921029616 +0000 UTC m=+1274.338047594" lastFinishedPulling="2026-02-17 16:16:08.296413064 +0000 UTC m=+1280.713431052" observedRunningTime="2026-02-17 16:16:10.968460188 +0000 UTC m=+1283.385478196" watchObservedRunningTime="2026-02-17 16:16:10.97462873 +0000 UTC m=+1283.391646708"
Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.981103 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.018993 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=34.625514235 podStartE2EDuration="42.018972102s" podCreationTimestamp="2026-02-17 16:15:29 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.908772342 +0000 UTC m=+1274.325790340" lastFinishedPulling="2026-02-17 16:16:09.302230229 +0000 UTC m=+1281.719248207" observedRunningTime="2026-02-17 16:16:10.984211714 +0000 UTC m=+1283.401229682" watchObservedRunningTime="2026-02-17 16:16:11.018972102 +0000 UTC m=+1283.435990080"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") "
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056443 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") "
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056667 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") "
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.064843 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config" (OuterVolumeSpecName: "config") pod "66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.069871 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g" (OuterVolumeSpecName: "kube-api-access-tkw5g") pod "66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "kube-api-access-tkw5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.072819 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159798 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159845 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159856 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.343619 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"]
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.518068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.531181 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.534408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g2mnr"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535480 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535381 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.538689 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.675440 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.675879 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676248 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0"
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778238 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778293 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778334 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778349 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" 
(UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778393 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:12.278375999 +0000 UTC m=+1284.695393977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.780541 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.780549 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.781952 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.781988 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8692e9ccbc74af749ec2fa3c25074da78e03b1b6bccd5192b74189beb87f97ff/globalmount\"" pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.797480 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.808288 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.830131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.934551 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.934584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" event={"ID":"66112eb6-8e4a-4469-8cfd-825bf6b7563d","Type":"ContainerDied","Data":"8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73"} Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.936114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerStarted","Data":"947f6f2b812825423fe5cd557b191cf1f236b7165f1fd81b546d6d944de340be"} Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.993780 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.998183 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.045685 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.047106 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.048822 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.052204 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.054034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.068769 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191856 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191901 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 
16:16:12.191915 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191986 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.192061 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.289842 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" path="/var/lib/kubelet/pods/66112eb6-8e4a-4469-8cfd-825bf6b7563d/volumes" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293631 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293685 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293704 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293782 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: 
\"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294349 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294362 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294398 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:13.294385708 +0000 UTC m=+1285.711403686 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295985 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.302075 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.313973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod 
\"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.316382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.318936 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.368921 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.005916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193"} Feb 17 16:16:13 crc kubenswrapper[4829]: W0217 16:16:13.260125 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b1a5c5_d463_48ba_b0d2_4409299812cb.slice/crio-b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c WatchSource:0}: Error finding container b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c: Status 404 returned error can't find the container with id b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.261077 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.316395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.316803 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.316964 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.317006 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:15.316990027 +0000 UTC m=+1287.734008005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.033702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"7e48d01ba99cffe229b614825d1eba453a4d25596643f17e43caf244c19c0ec8"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.036783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"daf6967fc59cb77ff9be84427251c2c6c0cba5c800832d1c51610616fbf7728e"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.040432 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c889225-ec15-48e6-a170-7b805954d7d6" containerID="91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b" exitCode=0 Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.040491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.043951 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerStarted","Data":"b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c"} 
Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.047409 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerID="b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc" exitCode=0 Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.047509 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.057638 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.161681284 podStartE2EDuration="42.057608438s" podCreationTimestamp="2026-02-17 16:15:32 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.92080163 +0000 UTC m=+1274.337819608" lastFinishedPulling="2026-02-17 16:16:12.816728784 +0000 UTC m=+1285.233746762" observedRunningTime="2026-02-17 16:16:14.054952168 +0000 UTC m=+1286.471970156" watchObservedRunningTime="2026-02-17 16:16:14.057608438 +0000 UTC m=+1286.474626416" Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.125611 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.246051272 podStartE2EDuration="38.125590704s" podCreationTimestamp="2026-02-17 16:15:36 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.909262875 +0000 UTC m=+1274.326280853" lastFinishedPulling="2026-02-17 16:16:12.788802307 +0000 UTC m=+1285.205820285" observedRunningTime="2026-02-17 16:16:14.123022426 +0000 UTC m=+1286.540040404" watchObservedRunningTime="2026-02-17 16:16:14.125590704 +0000 UTC m=+1286.542608692" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.060423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" 
event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerStarted","Data":"69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09"} Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.061154 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.064716 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerStarted","Data":"6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8"} Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.065290 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.097413 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podStartSLOduration=3.6580011470000002 podStartE2EDuration="52.097390212s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:24.400334536 +0000 UTC m=+1236.817352514" lastFinishedPulling="2026-02-17 16:16:12.839723601 +0000 UTC m=+1285.256741579" observedRunningTime="2026-02-17 16:16:15.083513495 +0000 UTC m=+1287.500531493" watchObservedRunningTime="2026-02-17 16:16:15.097390212 +0000 UTC m=+1287.514408200" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.108087 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" podStartSLOduration=3.754979717 podStartE2EDuration="5.108068604s" podCreationTimestamp="2026-02-17 16:16:10 +0000 UTC" firstStartedPulling="2026-02-17 16:16:11.464017127 +0000 UTC m=+1283.881035105" lastFinishedPulling="2026-02-17 16:16:12.817106014 +0000 UTC m=+1285.234123992" observedRunningTime="2026-02-17 16:16:15.104330585 +0000 UTC m=+1287.521348573" 
watchObservedRunningTime="2026-02-17 16:16:15.108068604 +0000 UTC m=+1287.525086582" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.402330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404142 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404185 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404268 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:19.404236416 +0000 UTC m=+1291.821254434 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.937539 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.992664 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.081712 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.130645 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.479313 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.494293 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.494918 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.496559 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.499986 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.524032 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.572796 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.613094 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.614817 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.619872 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.633965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634085 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634197 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634261 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643893 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.735874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.735997 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: 
\"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736132 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736154 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " 
pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736254 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 
16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.770367 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.822871 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838081 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod 
\"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838270 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838327 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.839381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.840103 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " 
pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.840156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.845850 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.871778 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.883353 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.910322 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.912511 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.922723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.923674 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.939899 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.939983 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940027 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.950422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041740 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc 
kubenswrapper[4829]: I0217 16:16:17.041777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042856 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042856 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.063077 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.068005 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.096346 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" containerID="cri-o://6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" gracePeriod=10 Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.096659 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.097703 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" containerID="cri-o://69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" gracePeriod=10 Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.138021 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.235114 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.307128 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.308890 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.311872 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312070 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312184 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312425 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-w5tr8" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.337119 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.344187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346107 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346616 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346701 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346788 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448826 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448849 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448918 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.449941 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.450290 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" 
(UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.450910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.454259 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.455164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.455755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.464123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.632730 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.714108 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.714494 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.962442 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.086423 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.143311 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193" exitCode=0 Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.144455 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.144585 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="903a9538-3e9d-4567-a9c2-0eeaaf450b85" containerName="galera" probeResult="failure" output=< Feb 17 16:16:18 crc kubenswrapper[4829]: wsrep_local_state_comment (Joined) differs from Synced Feb 17 16:16:18 crc kubenswrapper[4829]: > Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.170212 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c889225-ec15-48e6-a170-7b805954d7d6" containerID="6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" exitCode=0 Feb 17 16:16:18 crc 
kubenswrapper[4829]: I0217 16:16:18.170293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.175871 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerID="69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" exitCode=0 Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.176919 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.341060 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.528005 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594503 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594543 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.599712 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw" (OuterVolumeSpecName: "kube-api-access-g9wpw") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "kube-api-access-g9wpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.640829 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.670524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config" (OuterVolumeSpecName: "config") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.673087 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.701896 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702059 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702121 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: 
I0217 16:16:18.702796 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702814 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702827 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.705119 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4" (OuterVolumeSpecName: "kube-api-access-qzcx4") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "kube-api-access-qzcx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.747458 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.800889 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config" (OuterVolumeSpecName: "config") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804833 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804861 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804871 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.962341 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:18 crc kubenswrapper[4829]: W0217 16:16:18.969609 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda954ada0_6e54_469b_a010_3da22abd6a61.slice/crio-db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec WatchSource:0}: Error finding container db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec: Status 404 returned error can't find the container with id db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.976029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.993343 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.099481 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Feb 17 16:16:19 crc kubenswrapper[4829]: W0217 16:16:19.110922 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadd70c30_2098_4686_bd7d_f693219a63b8.slice/crio-7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54 WatchSource:0}: Error finding container 7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54: Status 404 returned error can't find the container with id 7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54 Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.200342 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerStarted","Data":"5400a25da3cf9813f2738c87bdee6d972d3e819ee60aec5081f361efad50e947"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"947f6f2b812825423fe5cd557b191cf1f236b7165f1fd81b546d6d944de340be"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206517 4829 scope.go:117] "RemoveContainer" containerID="6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206684 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.219817 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerStarted","Data":"db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.226010 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerStarted","Data":"c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.229335 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2hx8h" event={"ID":"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088","Type":"ContainerStarted","Data":"3a840ae80e771944bfbb62dfe84d04e5d55e6a640be0ff7bec0de168e1adfa6a"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.237584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.246722 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.251598 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.259971 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-84gsz" podStartSLOduration=2.248834879 podStartE2EDuration="7.259958974s" podCreationTimestamp="2026-02-17 16:16:12 +0000 UTC" firstStartedPulling="2026-02-17 16:16:13.264511561 +0000 UTC m=+1285.681529539" lastFinishedPulling="2026-02-17 16:16:18.275635656 +0000 UTC m=+1290.692653634" observedRunningTime="2026-02-17 16:16:19.250002351 +0000 UTC m=+1291.667020329" watchObservedRunningTime="2026-02-17 16:16:19.259958974 +0000 UTC m=+1291.676976952" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.281685 4829 scope.go:117] "RemoveContainer" containerID="91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.292656 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.301425 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.317564 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.337062 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.338220 4829 scope.go:117] "RemoveContainer" 
containerID="69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.422132 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422362 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422396 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422451 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:27.422432865 +0000 UTC m=+1299.839450843 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.454977 4829 scope.go:117] "RemoveContainer" containerID="b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.265606 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2hx8h" event={"ID":"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088","Type":"ContainerStarted","Data":"c24ed5c8ce90d9e70304c01fe433c136dc0088914cf7b57c6ccb091e1bd6358c"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.270031 4829 generic.go:334] "Generic (PLEG): container finished" podID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerID="b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75" exitCode=0 Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.270101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.273514 4829 generic.go:334] "Generic (PLEG): container finished" podID="a954ada0-6e54-469b-a010-3da22abd6a61" containerID="d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531" exitCode=0 Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.275410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.330905 4829 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovn-controller-metrics-2hx8h" podStartSLOduration=4.330885259 podStartE2EDuration="4.330885259s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:20.28925425 +0000 UTC m=+1292.706272248" watchObservedRunningTime="2026-02-17 16:16:20.330885259 +0000 UTC m=+1292.747903237" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.354235 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" path="/var/lib/kubelet/pods/5c13771b-c220-4ce6-9d1c-3c76af499220/volumes" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.378982 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" path="/var/lib/kubelet/pods/5c889225-ec15-48e6-a170-7b805954d7d6/volumes" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.401100 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.300468 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerStarted","Data":"4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.300947 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.305138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"91b4f713a9268ff8e01a4b943596ef88edd8ba7c1d7786c169d974b4e2b70fa8"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.310686 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerStarted","Data":"90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.310721 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.323329 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" podStartSLOduration=5.323312062 podStartE2EDuration="5.323312062s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:21.316235914 +0000 UTC m=+1293.733253892" watchObservedRunningTime="2026-02-17 16:16:21.323312062 +0000 UTC m=+1293.740330040" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.335949 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-crv29" podStartSLOduration=5.3359341350000005 podStartE2EDuration="5.335934135s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:21.33009207 +0000 UTC m=+1293.747110068" watchObservedRunningTime="2026-02-17 16:16:21.335934135 +0000 UTC m=+1293.752952113" Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.321150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"8bcdeb124d89b2dd03d667081e587ed828cc755f2914df8608da1d4404833615"} Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.348512 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-northd-0" podStartSLOduration=3.564341916 podStartE2EDuration="5.348489189s" podCreationTimestamp="2026-02-17 16:16:17 +0000 UTC" firstStartedPulling="2026-02-17 16:16:19.114779649 +0000 UTC m=+1291.531797627" lastFinishedPulling="2026-02-17 16:16:20.898926922 +0000 UTC m=+1293.315944900" observedRunningTime="2026-02-17 16:16:22.346933618 +0000 UTC m=+1294.763951596" watchObservedRunningTime="2026-02-17 16:16:22.348489189 +0000 UTC m=+1294.765507187" Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.425103 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.425156 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:23 crc kubenswrapper[4829]: I0217 16:16:23.328417 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.464661 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465587 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465604 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465622 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465629 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465653 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465661 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465675 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465682 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465937 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465969 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.466822 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.469656 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.483869 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.607174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.607568 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.678724 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-864565556d-824bj" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" containerID="cri-o://76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" gracePeriod=15 Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.709433 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " 
pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.709651 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.710727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.729795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.762227 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.797698 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.826722 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.338855 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373437 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373488 4829 generic.go:334] "Generic (PLEG): container finished" podID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerID="76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" exitCode=2 Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerDied","Data":"76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07"} Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.409787 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.410064 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-crv29" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns" containerID="cri-o://90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" gracePeriod=10 Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.433141 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.459749 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.777772 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.052756 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.054541 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.063620 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.152985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.153061 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" 
Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.156769 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.168798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.168912 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.176205 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255697 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255896 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255945 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.257381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.267104 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.277272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.357408 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.357484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod 
\"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.358146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.375044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.378402 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.385210 4829 generic.go:334] "Generic (PLEG): container finished" podID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerID="90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.385276 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.386398 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerStarted","Data":"7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.387788 4829 generic.go:334] "Generic (PLEG): container finished" podID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerID="c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.387819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerDied","Data":"c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.488390 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.577496 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:16:28 crc kubenswrapper[4829]: W0217 16:16:28.643093 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f22317f_8a58_4b93_b29f_a0e585ac48a9.slice/crio-860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56 WatchSource:0}: Error finding container 860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56: Status 404 returned error can't find the container with id 860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56 Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.053759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.070871 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.072217 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.080066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.188394 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.195952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.196846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.219810 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.221941 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.224462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.239334 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.267293 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.269149 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.270407 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.302341 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.303830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.303655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: 
\"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320448 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320525 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.377895 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:16:29 crc kubenswrapper[4829]: E0217 16:16:29.378311 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378624 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378815 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.379475 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.386975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.398217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"5400a25da3cf9813f2738c87bdee6d972d3e819ee60aec5081f361efad50e947"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406289 4829 scope.go:117] "RemoveContainer" containerID="90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406414 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414422 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414662 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414906 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.415103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.422856 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" 
event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerStarted","Data":"083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.426373 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.427626 4829 generic.go:334] "Generic (PLEG): container finished" podID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerID="60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801" exitCode=0 Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.427701 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerDied","Data":"60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.430958 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.431114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerDied","Data":"1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.431251 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.434105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerStarted","Data":"f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.470910 4829 scope.go:117] "RemoveContainer" containerID="b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523328 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523393 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: 
\"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523560 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523616 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523907 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523948 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523984 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524292 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca" (OuterVolumeSpecName: "service-ca") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524306 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb567\" 
(UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524794 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525123 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525183 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525178 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle" (OuterVolumeSpecName: 
"trusted-ca-bundle") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526203 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config" (OuterVolumeSpecName: "console-config") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526878 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.534364 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.538344 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5" (OuterVolumeSpecName: "kube-api-access-sxld5") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "kube-api-access-sxld5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.542735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf" (OuterVolumeSpecName: "kube-api-access-gvfqf") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "kube-api-access-gvfqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.545553 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.548390 4829 scope.go:117] "RemoveContainer" containerID="76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.553875 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.554192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.616357 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.622630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626455 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config" (OuterVolumeSpecName: "config") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: W0217 16:16:29.626863 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ed89f1d3-16f2-4e67-82d5-aed34c03792c/volumes/kubernetes.io~configmap/config Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626884 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config" (OuterVolumeSpecName: "config") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626993 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627249 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627260 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627271 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627280 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 
16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627290 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627298 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627307 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627316 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627324 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627907 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj"
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.650168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj"
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.663161 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.727740 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552"
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.728897 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.760440 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"]
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.761600 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj"
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.768076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj"
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.788064 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"]
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.814226 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-864565556d-824bj"]
Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.834272 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-864565556d-824bj"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.032409 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.137869 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138049 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138107 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138167 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138344 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138477 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") "
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.139285 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.142609 4829 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.144605 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.146188 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r" (OuterVolumeSpecName: "kube-api-access-mq87r") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "kube-api-access-mq87r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.149502 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.179354 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts" (OuterVolumeSpecName: "scripts") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.184111 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.187306 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.190110 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"]
Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191831 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191859 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance"
Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191875 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="init"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191882 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="init"
Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191908 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192121 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192139 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192948 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.206163 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.237631 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ltmz7"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245592 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245652 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245736 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245749 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245762 4829 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245772 4829 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245780 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245788 4829 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.349805 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.350123 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.350959 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.371044 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" path="/var/lib/kubelet/pods/cc453fb9-9d54-4441-bcae-64e34e837dac/volumes"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.371815 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" path="/var/lib/kubelet/pods/ed89f1d3-16f2-4e67-82d5-aed34c03792c/volumes"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.378216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.381781 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.384124 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.387808 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.395039 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.486079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.486306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.511852 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerDied","Data":"b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c"}
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.511893 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.512002 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.513857 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.515616 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerStarted","Data":"f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916"}
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.528470 4829 generic.go:334] "Generic (PLEG): container finished" podID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerID="e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c" exitCode=0
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.528589 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerDied","Data":"e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c"}
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.541253 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerStarted","Data":"97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f"}
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.545234 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f"}
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.570711 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8f32-account-create-update-gv4hc" podStartSLOduration=2.57069301 podStartE2EDuration="2.57069301s" podCreationTimestamp="2026-02-17 16:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:30.563221599 +0000 UTC m=+1302.980239577" watchObservedRunningTime="2026-02-17 16:16:30.57069301 +0000 UTC m=+1302.987710988"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.588638 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.588713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.589483 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.606888 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.656821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.667391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.681401 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vnwrj"]
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.734635 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd"
Feb 17 16:16:30 crc kubenswrapper[4829]: W0217 16:16:30.877144 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406819b6_b859_4d4d_93ee_43180f5981bf.slice/crio-0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96 WatchSource:0}: Error finding container 0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96: Status 404 returned error can't find the container with id 0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96
Feb 17 16:16:30 crc kubenswrapper[4829]: W0217 16:16:30.883528 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea266eaa_6bce_499f_9891_ca9ec670e465.slice/crio-d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1 WatchSource:0}: Error finding container d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1: Status 404 returned error can't find the container with id d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.333473 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vkzf7"
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.405765 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"]
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.409295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") "
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.409509 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") "
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.411113 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5973a92c-8e88-4f62-b9ce-5c28e57ced0a" (UID: "5973a92c-8e88-4f62-b9ce-5c28e57ced0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.418169 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx" (OuterVolumeSpecName: "kube-api-access-6k6jx") pod "5973a92c-8e88-4f62-b9ce-5c28e57ced0a" (UID: "5973a92c-8e88-4f62-b9ce-5c28e57ced0a"). InnerVolumeSpecName "kube-api-access-6k6jx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.513402 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.513436 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.524759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"]
Feb 17 16:16:31 crc kubenswrapper[4829]: W0217 16:16:31.533638 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03006c3_35b5_45e5_9b9f_578a8eabbf22.slice/crio-da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70 WatchSource:0}: Error finding container da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70: Status 404 returned error can't find the container with id da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.558941 4829 generic.go:334] "Generic (PLEG): container finished" podID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerID="97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f" exitCode=0
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.559010 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerDied","Data":"97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.561449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerStarted","Data":"02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.563127 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerStarted","Data":"da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.564716 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerStarted","Data":"78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.568853 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerStarted","Data":"459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.568921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerStarted","Data":"d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.576018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"f8384053ab6137c27b9271267c4cccc647d9e2209f6bb04cce6b1f6a5db93eaa"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.579139 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerStarted","Data":"2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.579178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerStarted","Data":"0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.581870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerDied","Data":"7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.581896 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf"
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.582048 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vkzf7"
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.584243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerStarted","Data":"17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.584274 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerStarted","Data":"7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1"}
Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.602721 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c7bc-account-create-update-zd552" podStartSLOduration=2.602696596 podStartE2EDuration="2.602696596s" podCreationTimestamp="2026-02-17 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:31.593400776 +0000 UTC m=+1304.010418754" watchObservedRunningTime="2026-02-17 16:16:31.602696596 +0000 UTC m=+1304.019714564"
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.293524 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l4jl2"
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.335288 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") "
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.335516 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") "
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.340275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd" (OuterVolumeSpecName: "kube-api-access-ft5pd") pod "aaa06d20-74dd-41b6-822b-485fdf6cc6d5" (UID: "aaa06d20-74dd-41b6-822b-485fdf6cc6d5"). InnerVolumeSpecName "kube-api-access-ft5pd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.340468 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aaa06d20-74dd-41b6-822b-485fdf6cc6d5" (UID: "aaa06d20-74dd-41b6-822b-485fdf6cc6d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.437548 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.437605 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.600658 4829 generic.go:334] "Generic (PLEG): container finished" podID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerID="17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.600976 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerDied","Data":"17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.604228 4829 generic.go:334] "Generic (PLEG): container finished" podID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerID="78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.604330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerDied","Data":"78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.608190 4829 generic.go:334] "Generic (PLEG): container finished" podID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.608217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612354 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l4jl2"
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612448 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerDied","Data":"f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612528 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef"
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.616718 4829 generic.go:334] "Generic (PLEG): container finished" podID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerID="6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.616822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.626244 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.626343 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.631171 4829 generic.go:334] "Generic (PLEG): container finished" podID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerID="b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b" exitCode=0
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.631364 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.635469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerStarted","Data":"718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1"}
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.807891 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f99f-account-create-update-7rvdj" podStartSLOduration=3.807874159 podStartE2EDuration="3.807874159s" podCreationTimestamp="2026-02-17 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:32.803353117 +0000 UTC m=+1305.220371105" watchObservedRunningTime="2026-02-17 16:16:32.807874159 +0000 UTC m=+1305.224892137"
Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.860886 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" podStartSLOduration=2.860857069 podStartE2EDuration="2.860857069s" podCreationTimestamp="2026-02-17 16:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:32.859954955 +0000 UTC m=+1305.276972933" watchObservedRunningTime="2026-02-17 16:16:32.860857069 +0000 UTC m=+1305.277875047" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.345877 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.465593 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"91c18e73-013c-4a4d-a4cc-922f43fccf45\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.465690 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"91c18e73-013c-4a4d-a4cc-922f43fccf45\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.466753 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91c18e73-013c-4a4d-a4cc-922f43fccf45" (UID: "91c18e73-013c-4a4d-a4cc-922f43fccf45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.469424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b" (OuterVolumeSpecName: "kube-api-access-hf85b") pod "91c18e73-013c-4a4d-a4cc-922f43fccf45" (UID: "91c18e73-013c-4a4d-a4cc-922f43fccf45"). InnerVolumeSpecName "kube-api-access-hf85b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.567961 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.568198 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654198 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerDied","Data":"083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6"} Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654749 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.305970 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.311847 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386635 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386856 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386909 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386939 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387065 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" (UID: "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387383 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387466 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" (UID: "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.391537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567" (OuterVolumeSpecName: "kube-api-access-sb567") pod "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" (UID: "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef"). InnerVolumeSpecName "kube-api-access-sb567". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.392309 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh" (OuterVolumeSpecName: "kube-api-access-cc5hh") pod "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" (UID: "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d"). InnerVolumeSpecName "kube-api-access-cc5hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489503 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489515 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666337 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerDied","Data":"f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666387 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666447 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.673611 4829 generic.go:334] "Generic (PLEG): container finished" podID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerID="459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945" exitCode=0 Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.673770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerDied","Data":"459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679682 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679715 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerDied","Data":"7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679755 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.682189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerStarted","Data":"50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26"} Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.022004 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.032940 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.107514 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108306 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.108403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108481 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.108560 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108747 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.109030 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.110014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110131 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.110250 4829 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110342 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110975 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111379 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111515 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111648 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111731 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.113308 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.116532 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.121863 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.204942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.205034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.307097 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.307312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: 
\"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.308604 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.325405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.434840 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.692524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"e85a27ef9b0c20e651ae3c51098f9a9be196db23f0c032d53e7793658c1483ab"} Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.697688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7"} Feb 17 16:16:35 crc kubenswrapper[4829]: W0217 16:16:35.932921 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd81de6_80f5_4245_9f19_c86c9ffc125d.slice/crio-4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f WatchSource:0}: Error finding container 4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f: Status 404 returned error can't find the container with id 4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.943086 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.128869 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.229745 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"ea266eaa-6bce-499f-9891-ca9ec670e465\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.229827 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"ea266eaa-6bce-499f-9891-ca9ec670e465\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.230368 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea266eaa-6bce-499f-9891-ca9ec670e465" (UID: "ea266eaa-6bce-499f-9891-ca9ec670e465"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.230843 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.233929 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f" (OuterVolumeSpecName: "kube-api-access-2ls4f") pod "ea266eaa-6bce-499f-9891-ca9ec670e465" (UID: "ea266eaa-6bce-499f-9891-ca9ec670e465"). InnerVolumeSpecName "kube-api-access-2ls4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.291119 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" path="/var/lib/kubelet/pods/5973a92c-8e88-4f62-b9ce-5c28e57ced0a/volumes" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.332712 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerDied","Data":"d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720408 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720490 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.724836 4829 generic.go:334] "Generic (PLEG): container finished" podID="406819b6-b859-4d4d-93ee-43180f5981bf" containerID="2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0" exitCode=0 Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.724911 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerDied","Data":"2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.729248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"49c16e35c06436eeb8c73f4b8b2a68bc23fca33e16bdc7d064897a3e30e301c9"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.739114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerStarted","Data":"8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.739170 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerStarted","Data":"4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.748137 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.748987 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.764393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.765258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.777989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.778459 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.799524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.800209 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.800981 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mxqd7" podStartSLOduration=1.80097006 podStartE2EDuration="1.80097006s" podCreationTimestamp="2026-02-17 16:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:36.777566178 +0000 UTC m=+1309.194584156" 
watchObservedRunningTime="2026-02-17 16:16:36.80097006 +0000 UTC m=+1309.217988038" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.815251 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.498253677 podStartE2EDuration="1m13.815232375s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:25.903657622 +0000 UTC m=+1238.320675600" lastFinishedPulling="2026-02-17 16:15:58.22063632 +0000 UTC m=+1270.637654298" observedRunningTime="2026-02-17 16:16:36.800227119 +0000 UTC m=+1309.217245097" watchObservedRunningTime="2026-02-17 16:16:36.815232375 +0000 UTC m=+1309.232250353" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.839111 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.907958911 podStartE2EDuration="1m13.839096419s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:26.23827607 +0000 UTC m=+1238.655294048" lastFinishedPulling="2026-02-17 16:15:58.169413578 +0000 UTC m=+1270.586431556" observedRunningTime="2026-02-17 16:16:36.837341262 +0000 UTC m=+1309.254359240" watchObservedRunningTime="2026-02-17 16:16:36.839096419 +0000 UTC m=+1309.256114397" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.863221 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" podStartSLOduration=6.86320714 podStartE2EDuration="6.86320714s" podCreationTimestamp="2026-02-17 16:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:36.855008769 +0000 UTC m=+1309.272026747" watchObservedRunningTime="2026-02-17 16:16:36.86320714 +0000 UTC m=+1309.280225118" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.885859 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=41.764842889 podStartE2EDuration="1m13.885844371s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:26.099339851 +0000 UTC m=+1238.516357829" lastFinishedPulling="2026-02-17 16:15:58.220341343 +0000 UTC m=+1270.637359311" observedRunningTime="2026-02-17 16:16:36.885296027 +0000 UTC m=+1309.302314005" watchObservedRunningTime="2026-02-17 16:16:36.885844371 +0000 UTC m=+1309.302862349" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.932817 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=41.494301532 podStartE2EDuration="1m13.93279998s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:25.889313214 +0000 UTC m=+1238.306331192" lastFinishedPulling="2026-02-17 16:15:58.327811662 +0000 UTC m=+1270.744829640" observedRunningTime="2026-02-17 16:16:36.916083178 +0000 UTC m=+1309.333101166" watchObservedRunningTime="2026-02-17 16:16:36.93279998 +0000 UTC m=+1309.349817958" Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.701223 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.809377 4829 generic.go:334] "Generic (PLEG): container finished" podID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerID="718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1" exitCode=0 Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.809464 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerDied","Data":"718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1"} Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.813195 4829 
generic.go:334] "Generic (PLEG): container finished" podID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerID="50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26" exitCode=0 Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.813249 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerDied","Data":"50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26"} Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.815989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"b29fbef8b292c4902f6f086484aeb803f7a4c29f2f87c33b7326d81889554552"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.350912 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.401616 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:38 crc kubenswrapper[4829]: E0217 16:16:38.402002 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402018 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: E0217 16:16:38.402057 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402064 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" 
containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402232 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402254 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402906 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.406235 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.406373 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xbdvq" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.439803 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.492331 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"406819b6-b859-4d4d-93ee-43180f5981bf\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.492518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"406819b6-b859-4d4d-93ee-43180f5981bf\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493100 4829 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "406819b6-b859-4d4d-93ee-43180f5981bf" (UID: "406819b6-b859-4d4d-93ee-43180f5981bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493397 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493656 4829 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.499223 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6" (OuterVolumeSpecName: "kube-api-access-lvvc6") pod "406819b6-b859-4d4d-93ee-43180f5981bf" (UID: "406819b6-b859-4d4d-93ee-43180f5981bf"). InnerVolumeSpecName "kube-api-access-lvvc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.568129 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.580096 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595200 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595220 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595320 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595386 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.599799 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.602667 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.604481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc 
kubenswrapper[4829]: I0217 16:16:38.619772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.745774 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.843210 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.843219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerDied","Data":"0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.844078 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.849863 4829 generic.go:334] "Generic (PLEG): container finished" podID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerID="8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3" exitCode=0 Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.850005 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerDied","Data":"8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.858468 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.859851 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.866085 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.876973 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046669 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046932 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152081 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152126 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152198 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152794 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.161396 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.176015 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.247545 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.681915 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:39 crc kubenswrapper[4829]: W0217 16:16:39.842446 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode14bea24_3170_4bdb_8811_9a94d94ae4b7.slice/crio-a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e WatchSource:0}: Error finding container a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e: Status 404 returned error can't find the container with id a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.874462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerDied","Data":"da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70"} Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.874494 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.875595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerStarted","Data":"a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e"} Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.876928 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerDied","Data":"02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be"} Feb 17 16:16:39 crc kubenswrapper[4829]: 
I0217 16:16:39.876953 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.948234 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.960839 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977649 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977702 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\" (UID: 
\"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.980296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e50b4954-d1c6-451e-b8f4-3ba817c89c6b" (UID: "e50b4954-d1c6-451e-b8f4-3ba817c89c6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.999437 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e03006c3-35b5-45e5-9b9f-578a8eabbf22" (UID: "e03006c3-35b5-45e5-9b9f-578a8eabbf22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.017180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb" (OuterVolumeSpecName: "kube-api-access-fvbwb") pod "e50b4954-d1c6-451e-b8f4-3ba817c89c6b" (UID: "e50b4954-d1c6-451e-b8f4-3ba817c89c6b"). InnerVolumeSpecName "kube-api-access-fvbwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.019608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x" (OuterVolumeSpecName: "kube-api-access-2ls2x") pod "e03006c3-35b5-45e5-9b9f-578a8eabbf22" (UID: "e03006c3-35b5-45e5-9b9f-578a8eabbf22"). InnerVolumeSpecName "kube-api-access-2ls2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081361 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081410 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081420 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081428 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.887164 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.887178 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.482519 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.510216 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"abd81de6-80f5-4245-9f19-c86c9ffc125d\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.510295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"abd81de6-80f5-4245-9f19-c86c9ffc125d\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.511432 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abd81de6-80f5-4245-9f19-c86c9ffc125d" (UID: "abd81de6-80f5-4245-9f19-c86c9ffc125d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.517486 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs" (OuterVolumeSpecName: "kube-api-access-s4gbs") pod "abd81de6-80f5-4245-9f19-c86c9ffc125d" (UID: "abd81de6-80f5-4245-9f19-c86c9ffc125d"). InnerVolumeSpecName "kube-api-access-s4gbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.613876 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.614270 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.900463 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.908069 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"7b3f944131c6f1201ac98c6a57b8a51ee85f8b9ddc0aec87e7452b12c2dc3229"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerDied","Data":"4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910286 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910344 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.938364 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=32.474811618 podStartE2EDuration="1m11.938341829s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.907759706 +0000 UTC m=+1274.324777694" lastFinishedPulling="2026-02-17 16:16:41.371289927 +0000 UTC m=+1313.788307905" observedRunningTime="2026-02-17 16:16:41.929556932 +0000 UTC m=+1314.346574910" watchObservedRunningTime="2026-02-17 16:16:41.938341829 +0000 UTC m=+1314.355359807" Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.032245 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:42 crc kubenswrapper[4829]: W0217 16:16:42.035746 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9903a8_9361_4b89_a039_72f3e6023014.slice/crio-df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c WatchSource:0}: Error finding container df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c: Status 404 returned error can't find the container with id df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.921627 4829 generic.go:334] "Generic (PLEG): container finished" podID="ec9903a8-9361-4b89-a039-72f3e6023014" containerID="49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053" exitCode=0 Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.921857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerDied","Data":"49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053"} Feb 17 
16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.923192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerStarted","Data":"df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933051 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"3c040a41cebf8d70b8baefb52efbd401563a8a49eb0f8b02d93d0f8560f67fba"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933098 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"f69133749a3667523012a8bb406ae6fee9f85ea5a4fe699e60e9cd1cf1035caf"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"d63cf2af0ef6375cfeb0fd533f0aa7bbe23da758b65075cb5582ea1d7fc82df0"} Feb 17 16:16:43 crc kubenswrapper[4829]: I0217 16:16:43.501516 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-75gff" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.415729 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482652 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482770 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482859 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482984 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483626 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts" (OuterVolumeSpecName: "scripts") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483940 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484038 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484129 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run" (OuterVolumeSpecName: "var-run") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.489919 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n" (OuterVolumeSpecName: "kube-api-access-swn4n") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "kube-api-access-swn4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585872 4829 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585897 4829 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585909 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585919 4829 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 
16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585927 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585936 4829 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerDied","Data":"df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c"} Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966805 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966887 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.997075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"1c69bdea01d0eb771aaed33e5c219b1787a9254995581723b8a3193237d120ee"} Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.204771 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.228507 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.244424 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.321620 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.523522 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.531932 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:45 crc 
kubenswrapper[4829]: I0217 16:16:45.677769 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678259 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678281 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678305 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678316 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678328 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678336 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678364 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678373 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678631 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" 
containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678651 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678666 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678696 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.686379 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.689451 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.704665 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.704769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc 
kubenswrapper[4829]: I0217 16:16:45.775895 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.777064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.780078 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.797843 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.807072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.807473 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.828690 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.908667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.908779 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j49wg\" (UniqueName: 
\"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.909683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.929462 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.012883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"05cb9a0c6481f759ae84af3cfad13fe4afda3863a81b78de62eaa011eac0f643"} Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.012921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"08554e9a8b8a36a92329c01a8fe5df0b356de6aee76a13d35000a6f089ea7dc8"} Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.021763 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.110017 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.320766 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" path="/var/lib/kubelet/pods/ec9903a8-9361-4b89-a039-72f3e6023014/volumes" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.506895 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.516834 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.530619 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.530666 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.536713 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.616402 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:46 crc kubenswrapper[4829]: W0217 16:16:46.623261 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c492d16_f301_449b_a877_a15a17739865.slice/crio-ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a WatchSource:0}: Error finding container 
ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a: Status 404 returned error can't find the container with id ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.854901 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.035461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"1a5168715961ab0df7d232692dfee428dafc361cfa022f838b5a790e6e42552d"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.035504 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"b02533177233a5d4b6fb93d36bca1cce5b981822103fe41f3cd562b88816d43e"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.036925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerStarted","Data":"d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.039155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerStarted","Data":"ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.040718 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.053637 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"50bc7039faccad056afde70287bb6da898fd1aa0f5e0a321af578d8b7019bda5"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.054159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"cea8a64498ea6d2002aa5f742146b402b9a523b186eca403bb746cab1b2d5f15"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.055920 4829 generic.go:334] "Generic (PLEG): container finished" podID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerID="17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376" exitCode=0 Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.055980 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerDied","Data":"17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.058070 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c492d16-f301-449b-a877-a15a17739865" containerID="6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d" exitCode=0 Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.058840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerDied","Data":"6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.130152 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.939803041 podStartE2EDuration="38.130133299s" podCreationTimestamp="2026-02-17 16:16:10 +0000 UTC" firstStartedPulling="2026-02-17 16:16:28.888348555 +0000 UTC m=+1301.305366543" 
lastFinishedPulling="2026-02-17 16:16:44.078678823 +0000 UTC m=+1316.495696801" observedRunningTime="2026-02-17 16:16:48.121566588 +0000 UTC m=+1320.538584576" watchObservedRunningTime="2026-02-17 16:16:48.130133299 +0000 UTC m=+1320.547151277" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.305131 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" path="/var/lib/kubelet/pods/abd81de6-80f5-4245-9f19-c86c9ffc125d/volumes" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.421979 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.423517 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.426797 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.437409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.572803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573186 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 
16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573409 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573505 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675656 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675781 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675818 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.676472 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.676643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.676839 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.677356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.677461 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.706757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.779530 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"
Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.618129 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619694 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" containerID="cri-o://4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" gracePeriod=600
Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619745 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" containerID="cri-o://0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a" gracePeriod=600
Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619778 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" containerID="cri-o://acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" gracePeriod=600
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085317 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a" exitCode=0
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085676 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" exitCode=0
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085688 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" exitCode=0
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085383 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a"}
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7"}
Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f"}
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.527723 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btrfb"]
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.529940 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.530245 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.532612 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.550354 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btrfb"]
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.649467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.649521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.751349 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.751426 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.753253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.773696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.858777 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb"
Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.424990 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.425374 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.425431 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw"
Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.426397 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.426487 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" gracePeriod=600
Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126769 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" exitCode=0
Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126814 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"}
Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126869 4829 scope.go:117] "RemoveContainer" containerID="9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"
Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.203045 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused"
Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.227887 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused"
Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.243841 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.322424 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:16:56 crc kubenswrapper[4829]: I0217 16:16:56.531025 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.202462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerDied","Data":"ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a"}
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.202779 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.236423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerDied","Data":"d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53"}
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.236460 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.254092 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.272769 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363011 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"5c492d16-f301-449b-a877-a15a17739865\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363060 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"5c492d16-f301-449b-a877-a15a17739865\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c492d16-f301-449b-a877-a15a17739865" (UID: "5c492d16-f301-449b-a877-a15a17739865"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.364106 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.364534 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2e81e7f-9610-493c-bdb8-6a7de58b94bf" (UID: "f2e81e7f-9610-493c-bdb8-6a7de58b94bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.372391 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx" (OuterVolumeSpecName: "kube-api-access-r4lwx") pod "5c492d16-f301-449b-a877-a15a17739865" (UID: "5c492d16-f301-449b-a877-a15a17739865"). InnerVolumeSpecName "kube-api-access-r4lwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.373826 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg" (OuterVolumeSpecName: "kube-api-access-j49wg") pod "f2e81e7f-9610-493c-bdb8-6a7de58b94bf" (UID: "f2e81e7f-9610-493c-bdb8-6a7de58b94bf"). InnerVolumeSpecName "kube-api-access-j49wg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466068 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466099 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466109 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.500333 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.673626 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.673993 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674032 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674073 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674099 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674136 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674178 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674407 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.675934 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.677790 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.678590 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.679291 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979" (OuterVolumeSpecName: "kube-api-access-bd979") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "kube-api-access-bd979". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.679454 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out" (OuterVolumeSpecName: "config-out") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.685032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config" (OuterVolumeSpecName: "config") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.693149 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.694734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.723741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "pvc-8e635818-7819-4dc1-bb9c-8b7954e16573". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.739015 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config" (OuterVolumeSpecName: "web-config") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.756915 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btrfb"]
Feb 17 16:16:57 crc kubenswrapper[4829]: W0217 16:16:57.763533 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf678697_9139_4571_9d3b_9c51ec34df7c.slice/crio-685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97 WatchSource:0}: Error finding container 685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97: Status 404 returned error can't find the container with id 685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776537 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776583 4829 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776594 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776604 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776640 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") on node \"crc\" "
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776652 4829 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776665 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776674 4829 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776682 4829 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776690 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.799567 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.799864 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e635818-7819-4dc1-bb9c-8b7954e16573" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573") on node "crc"
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.873227 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"]
Feb 17 16:16:57 crc kubenswrapper[4829]: W0217 16:16:57.873908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod694bd0d8_2bbe_4f9a_945a_dd7132c0645e.slice/crio-5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043 WatchSource:0}: Error finding container 5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043: Status 404 returned error can't find the container with id 5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.878299 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.249316 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.249363 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252745 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252805 4829 scope.go:117] "RemoveContainer" containerID="0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a"
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.251236 4829 generic.go:334] "Generic (PLEG): container finished" podID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerID="56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580" exitCode=0
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252899 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerStarted","Data":"5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.257132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.262269 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265161 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerStarted","Data":"e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265193 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerStarted","Data":"685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97"}
Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265250 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.312387 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-btrfb" podStartSLOduration=7.312370691 podStartE2EDuration="7.312370691s" podCreationTimestamp="2026-02-17 16:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:58.31088461 +0000 UTC m=+1330.727902588" watchObservedRunningTime="2026-02-17 16:16:58.312370691 +0000 UTC m=+1330.729388669" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.522715 4829 scope.go:117] "RemoveContainer" containerID="acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.577733 4829 scope.go:117] "RemoveContainer" containerID="4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.580581 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.599186 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.607948 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608356 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608373 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608383 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608389 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608408 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608415 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608428 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608435 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608452 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="init-config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608458 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="init-config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608469 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608475 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 
16:16:58.608744 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608759 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608780 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608792 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608801 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.612873 4829 scope.go:117] "RemoveContainer" containerID="7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.613263 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616553 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vxmz6" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616595 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616628 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616734 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616750 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616848 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.617257 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.624699 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.636221 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.693615 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695755 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.696050 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: 
\"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.697653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.697770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698236 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698368 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698460 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698552 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698668 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.800986 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801900 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.802011 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.802192 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803403 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803500 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803783 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803954 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.804169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.804914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc 
kubenswrapper[4829]: I0217 16:16:58.805972 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.806508 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.806550 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe3c2171ea8e537d787d3308fa5bc6f869ae05d2809df2c7eb9ceb73db78889d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808640 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808639 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808925 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809375 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809523 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809665 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809708 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.822220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.847754 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.998327 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.273595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerStarted","Data":"50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.276100 4829 generic.go:334] "Generic (PLEG): container finished" podID="df678697-9139-4571-9d3b-9c51ec34df7c" containerID="e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024" exitCode=0 Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.276171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerDied","Data":"e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.281483 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerStarted","Data":"d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.281523 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.302828 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9z4lf" podStartSLOduration=3.670333191 podStartE2EDuration="21.302809634s" podCreationTimestamp="2026-02-17 16:16:38 +0000 UTC" firstStartedPulling="2026-02-17 16:16:39.858304483 +0000 UTC m=+1312.275322461" lastFinishedPulling="2026-02-17 16:16:57.490780936 +0000 UTC m=+1329.907798904" observedRunningTime="2026-02-17 16:16:59.293281997 +0000 UTC m=+1331.710299975" 
watchObservedRunningTime="2026-02-17 16:16:59.302809634 +0000 UTC m=+1331.719827612" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.339463 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" podStartSLOduration=11.339443483 podStartE2EDuration="11.339443483s" podCreationTimestamp="2026-02-17 16:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:59.327012337 +0000 UTC m=+1331.744030315" watchObservedRunningTime="2026-02-17 16:16:59.339443483 +0000 UTC m=+1331.756461461" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.517335 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:59 crc kubenswrapper[4829]: W0217 16:16:59.522289 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0afff9a0_fd8a_4388_903e_647ae66128db.slice/crio-b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf WatchSource:0}: Error finding container b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf: Status 404 returned error can't find the container with id b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.293808 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" path="/var/lib/kubelet/pods/177c70b9-7b56-48f4-abd1-4d7a9c86450a/volumes" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.295611 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf"} Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.726279 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.759530 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"df678697-9139-4571-9d3b-9c51ec34df7c\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.759768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"df678697-9139-4571-9d3b-9c51ec34df7c\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.760325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df678697-9139-4571-9d3b-9c51ec34df7c" (UID: "df678697-9139-4571-9d3b-9c51ec34df7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.760543 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.772951 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9" (OuterVolumeSpecName: "kube-api-access-lxgs9") pod "df678697-9139-4571-9d3b-9c51ec34df7c" (UID: "df678697-9139-4571-9d3b-9c51ec34df7c"). InnerVolumeSpecName "kube-api-access-lxgs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.862354 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.034683 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: E0217 16:17:01.035476 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.035503 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.035827 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.037065 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.039978 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.046914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067559 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067633 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.169854 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.170040 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.170066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.175549 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.176136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.206909 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.305602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" 
event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerDied","Data":"685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97"} Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.305646 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.306748 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.359013 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.867453 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: W0217 16:17:01.950410 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4cfa907_6caa_41a9_b86a_371fd960e471.slice/crio-16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c WatchSource:0}: Error finding container 16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c: Status 404 returned error can't find the container with id 16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c Feb 17 16:17:02 crc kubenswrapper[4829]: I0217 16:17:02.317977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerStarted","Data":"16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c"} Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.336684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a"} Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.783228 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.855611 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.856237 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" containerID="cri-o://4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" gracePeriod=10 Feb 17 16:17:04 crc kubenswrapper[4829]: I0217 16:17:04.348387 4829 generic.go:334] "Generic (PLEG): container finished" podID="a954ada0-6e54-469b-a010-3da22abd6a61" containerID="4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" exitCode=0 Feb 17 16:17:04 crc kubenswrapper[4829]: I0217 16:17:04.348532 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89"} Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.204869 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.229720 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.275812 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.400959 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec"} Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.401678 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.490096 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606660 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod 
\"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.607000 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.614629 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f" (OuterVolumeSpecName: "kube-api-access-cl46f") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "kube-api-access-cl46f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.662665 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.667994 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.672241 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.698392 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config" (OuterVolumeSpecName: "config") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709477 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709511 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709523 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709535 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 
16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709545 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.413849 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerStarted","Data":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"} Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.416588 4829 generic.go:334] "Generic (PLEG): container finished" podID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerID="50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d" exitCode=0 Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.416662 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.420860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerDied","Data":"50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d"} Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.442038 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.0657374 podStartE2EDuration="6.442019529s" podCreationTimestamp="2026-02-17 16:17:01 +0000 UTC" firstStartedPulling="2026-02-17 16:17:01.952369867 +0000 UTC m=+1334.369387845" lastFinishedPulling="2026-02-17 16:17:06.328651996 +0000 UTC m=+1338.745669974" observedRunningTime="2026-02-17 16:17:07.439593483 +0000 UTC m=+1339.856611461" watchObservedRunningTime="2026-02-17 16:17:07.442019529 +0000 UTC m=+1339.859037497" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.490891 4829 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.516685 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528006 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:07 crc kubenswrapper[4829]: E0217 16:17:07.528646 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528664 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: E0217 16:17:07.528676 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="init" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528682 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="init" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528933 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.530271 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.542748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.630199 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.631545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.631626 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.632035 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.635334 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.649779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733469 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: 
\"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.734467 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.757654 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.809323 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.811589 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.821307 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.829640 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.830992 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.835917 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.841729 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.842412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.845727 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.852894 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.876798 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.917755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.939676 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.941288 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947491 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947672 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947778 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: 
\"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947879 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.960351 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.964602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.013255 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.015057 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.026922 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.028501 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033299 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033480 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033927 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.034213 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.040259 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.049995 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.051717 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.051897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052427 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.056404 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: 
I0217 16:17:08.057355 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.057519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058327 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.057425 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058720 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: 
\"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.053760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.059695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.068761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.084167 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.090206 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.092763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc 
kubenswrapper[4829]: I0217 16:17:08.107024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.108881 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.153896 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.157973 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161184 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161886 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161962 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162145 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162190 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162592 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.163739 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.164472 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.171031 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.176556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.182174 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.183871 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.185599 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.193874 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.194933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.199877 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.264615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.264881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod 
\"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.307470 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" path="/var/lib/kubelet/pods/a954ada0-6e54-469b-a010-3da22abd6a61/volumes" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.365892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.366321 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.370692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.407612 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: 
\"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.432760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.436922 4829 generic.go:334] "Generic (PLEG): container finished" podID="0afff9a0-fd8a-4388-903e-647ae66128db" containerID="1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a" exitCode=0 Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.437963 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerDied","Data":"1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a"} Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.445431 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.463442 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.498124 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.575540 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.786591 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.903464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.241684 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.288617 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319180 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 
17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319302 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.334510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc" (OuterVolumeSpecName: "kube-api-access-njvhc") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "kube-api-access-njvhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.353820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.368036 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.394418 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.436963 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.441443 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.441468 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.443141 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.450231 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerStarted","Data":"026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.451559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerStarted","Data":"4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.458135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"ca7725561433222ef92fd7ad0ec590cf20bab7b196d6e1f6e9339f9b216776bd"} Feb 17 
16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.461494 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data" (OuterVolumeSpecName: "config-data") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.462944 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerStarted","Data":"414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.462978 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerStarted","Data":"30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.467623 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerStarted","Data":"e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.467655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerStarted","Data":"0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.475822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" 
event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerStarted","Data":"2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.475870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerStarted","Data":"872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerDied","Data":"a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485933 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485995 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.506187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerStarted","Data":"3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.516443 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2cec-account-create-update-hfc78" podStartSLOduration=2.516348121 podStartE2EDuration="2.516348121s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.488960841 +0000 UTC m=+1341.905978819" watchObservedRunningTime="2026-02-17 16:17:09.516348121 +0000 UTC m=+1341.933366099" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.545180 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.561356 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wlnfn" podStartSLOduration=2.561338645 podStartE2EDuration="2.561338645s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.512770114 +0000 UTC m=+1341.929788092" watchObservedRunningTime="2026-02-17 16:17:09.561338645 +0000 UTC m=+1341.978356623" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.572841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-sgsbf" podStartSLOduration=2.572823375 
podStartE2EDuration="2.572823375s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.527673006 +0000 UTC m=+1341.944690984" watchObservedRunningTime="2026-02-17 16:17:09.572823375 +0000 UTC m=+1341.989841353" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.756092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:09 crc kubenswrapper[4829]: W0217 16:17:09.758972 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fb73f59_cddf_4630_b754_264ec2ccee1e.slice/crio-6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6 WatchSource:0}: Error finding container 6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6: Status 404 returned error can't find the container with id 6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6 Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.770168 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.803099 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.901453 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:09 crc kubenswrapper[4829]: E0217 16:17:09.901968 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.901986 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc 
kubenswrapper[4829]: I0217 16:17:09.902919 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.904119 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.925861 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960028 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960099 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960164 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960363 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.066561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.067757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.068369 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.069191 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.070792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.070965 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071960 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.072451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.073002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.090595 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.245838 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.524399 4829 generic.go:334] "Generic (PLEG): container finished" podID="45907bce-01ca-47e8-bfef-12ae037bb254" containerID="61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.524759 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerDied","Data":"61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.561567 4829 generic.go:334] "Generic (PLEG): container finished" podID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerID="4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.563526 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerDied","Data":"4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.575142 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerStarted","Data":"1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.575185 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerStarted","Data":"6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.581819 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerID="414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.581949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerDied","Data":"414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.591464 4829 generic.go:334] "Generic (PLEG): container finished" podID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerID="e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.591529 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerDied","Data":"e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606687 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerID="0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606747 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerDied","Data":"0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606768 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerStarted","Data":"a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.629409 4829 
generic.go:334] "Generic (PLEG): container finished" podID="64394b7b-175f-4429-b284-783394b5362b" containerID="a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.630365 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerDied","Data":"a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.661428 4829 generic.go:334] "Generic (PLEG): container finished" podID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerID="2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.661502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerDied","Data":"2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.664948 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerStarted","Data":"0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.803884 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.685013 4829 generic.go:334] "Generic (PLEG): container finished" podID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerID="1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f" exitCode=0 Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.685327 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" 
event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerDied","Data":"1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f"} Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.706779 4829 generic.go:334] "Generic (PLEG): container finished" podID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerID="06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd" exitCode=0 Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.707736 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd"} Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.707763 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerStarted","Data":"4f1a71803b633d03391de17f6f16604c5e107eae12d0b26db71e47dca08add20"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.290642 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.436187 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.436484 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.438293 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" (UID: "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.446632 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj" (OuterVolumeSpecName: "kube-api-access-g7xwj") pod "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" (UID: "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4"). InnerVolumeSpecName "kube-api-access-g7xwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.540551 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.540605 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.571367 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.574895 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.610833 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.616689 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.628528 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.638863 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641771 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"f7208dff-6f9e-410a-9b88-e6def8b38478\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641844 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"a1857247-1b55-4f04-91b5-2725347ddd5e\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641927 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"a1857247-1b55-4f04-91b5-2725347ddd5e\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641956 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"f7208dff-6f9e-410a-9b88-e6def8b38478\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642190 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7208dff-6f9e-410a-9b88-e6def8b38478" (UID: "f7208dff-6f9e-410a-9b88-e6def8b38478"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642523 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1857247-1b55-4f04-91b5-2725347ddd5e" (UID: "a1857247-1b55-4f04-91b5-2725347ddd5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642937 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642951 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.643864 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.650186 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8" (OuterVolumeSpecName: "kube-api-access-g77j8") pod "f7208dff-6f9e-410a-9b88-e6def8b38478" (UID: "f7208dff-6f9e-410a-9b88-e6def8b38478"). InnerVolumeSpecName "kube-api-access-g77j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.657898 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk" (OuterVolumeSpecName: "kube-api-access-2blsk") pod "a1857247-1b55-4f04-91b5-2725347ddd5e" (UID: "a1857247-1b55-4f04-91b5-2725347ddd5e"). InnerVolumeSpecName "kube-api-access-2blsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.734374 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerDied","Data":"0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.735238 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.734387 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741351 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerDied","Data":"872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741389 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741432 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.743873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"84ad18d3-95f7-43e4-b906-65466cf9b14f\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.743926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"45907bce-01ca-47e8-bfef-12ae037bb254\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744034 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"45907bce-01ca-47e8-bfef-12ae037bb254\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744060 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"64394b7b-175f-4429-b284-783394b5362b\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744089 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"64394b7b-175f-4429-b284-783394b5362b\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744109 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"5fb73f59-cddf-4630-b754-264ec2ccee1e\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744147 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"84ad18d3-95f7-43e4-b906-65466cf9b14f\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744245 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744265 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"5fb73f59-cddf-4630-b754-264ec2ccee1e\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744733 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744757 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerDied","Data":"3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744839 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744877 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64394b7b-175f-4429-b284-783394b5362b" (UID: "64394b7b-175f-4429-b284-783394b5362b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745182 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fb73f59-cddf-4630-b754-264ec2ccee1e" (UID: "5fb73f59-cddf-4630-b754-264ec2ccee1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745589 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84ad18d3-95f7-43e4-b906-65466cf9b14f" (UID: "84ad18d3-95f7-43e4-b906-65466cf9b14f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745923 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "964c7b6b-c551-489a-9a5b-7fbe31c855b2" (UID: "964c7b6b-c551-489a-9a5b-7fbe31c855b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.746806 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45907bce-01ca-47e8-bfef-12ae037bb254" (UID: "45907bce-01ca-47e8-bfef-12ae037bb254"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.749167 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx" (OuterVolumeSpecName: "kube-api-access-jmqqx") pod "964c7b6b-c551-489a-9a5b-7fbe31c855b2" (UID: "964c7b6b-c551-489a-9a5b-7fbe31c855b2"). InnerVolumeSpecName "kube-api-access-jmqqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.750425 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85" (OuterVolumeSpecName: "kube-api-access-vww85") pod "5fb73f59-cddf-4630-b754-264ec2ccee1e" (UID: "5fb73f59-cddf-4630-b754-264ec2ccee1e"). InnerVolumeSpecName "kube-api-access-vww85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5" (OuterVolumeSpecName: "kube-api-access-f5mc5") pod "45907bce-01ca-47e8-bfef-12ae037bb254" (UID: "45907bce-01ca-47e8-bfef-12ae037bb254"). InnerVolumeSpecName "kube-api-access-f5mc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerDied","Data":"6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751762 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751708 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.753096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7" (OuterVolumeSpecName: "kube-api-access-drcd7") pod "64394b7b-175f-4429-b284-783394b5362b" (UID: "64394b7b-175f-4429-b284-783394b5362b"). InnerVolumeSpecName "kube-api-access-drcd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.754314 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"318f8d43e12a3179e894e2996e37bee062931a3036d8b7a57c8e1d5e759380f1"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.757508 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t" (OuterVolumeSpecName: "kube-api-access-kpj6t") pod "84ad18d3-95f7-43e4-b906-65466cf9b14f" (UID: "84ad18d3-95f7-43e4-b906-65466cf9b14f"). InnerVolumeSpecName "kube-api-access-kpj6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758017 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerDied","Data":"30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758050 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758057 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.764797 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerStarted","Data":"111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770004 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770060 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerDied","Data":"a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770091 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerDied","Data":"026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777146 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777190 4829 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.786841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podStartSLOduration=3.78682607 podStartE2EDuration="3.78682607s" podCreationTimestamp="2026-02-17 16:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.786003388 +0000 UTC m=+1345.203021366" watchObservedRunningTime="2026-02-17 16:17:12.78682607 +0000 UTC m=+1345.203844048" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788882 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerDied","Data":"4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788924 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788989 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848183 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848543 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848900 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848938 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848951 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848965 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848981 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") on node \"crc\" DevicePath \"\"" Feb 
17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849008 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849021 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849034 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:13 crc kubenswrapper[4829]: I0217 16:17:13.805693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:19 crc kubenswrapper[4829]: I0217 16:17:19.877177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"2dcdfce0630d694970e5143b2118a6e5bf6a933de71be67ae3cce25ba6df4523"} Feb 17 16:17:19 crc kubenswrapper[4829]: I0217 16:17:19.920421 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.920405151 podStartE2EDuration="21.920405151s" podCreationTimestamp="2026-02-17 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:19.916829685 +0000 UTC m=+1352.333847673" watchObservedRunningTime="2026-02-17 16:17:19.920405151 +0000 UTC m=+1352.337423129" Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.247841 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.365544 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.365876 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" containerID="cri-o://d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" gracePeriod=10 Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.891738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerStarted","Data":"0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237"} Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.896149 4829 generic.go:334] "Generic (PLEG): container finished" podID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerID="d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" exitCode=0 Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.896520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2"} Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.909294 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cs5v7" podStartSLOduration=3.295865659 podStartE2EDuration="13.909276963s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="2026-02-17 16:17:09.877191734 +0000 UTC m=+1342.294209702" lastFinishedPulling="2026-02-17 16:17:20.490603028 +0000 UTC m=+1352.907621006" observedRunningTime="2026-02-17 16:17:20.908399979 +0000 UTC 
m=+1353.325417977" watchObservedRunningTime="2026-02-17 16:17:20.909276963 +0000 UTC m=+1353.326294931" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.089794 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.250944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.250992 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251124 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: 
\"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251290 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.265495 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4" (OuterVolumeSpecName: "kube-api-access-8drp4") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "kube-api-access-8drp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.301362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.302099 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.305511 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.308156 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.309114 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config" (OuterVolumeSpecName: "config") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354561 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354605 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354616 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354626 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354635 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354644 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.913906 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.914822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043"} Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.914944 4829 scope.go:117] "RemoveContainer" containerID="d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.961125 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.969005 4829 scope.go:117] "RemoveContainer" containerID="56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.973452 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:22 crc kubenswrapper[4829]: I0217 16:17:22.303436 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" path="/var/lib/kubelet/pods/694bd0d8-2bbe-4f9a-945a-dd7132c0645e/volumes" Feb 17 16:17:24 crc kubenswrapper[4829]: I0217 16:17:24.000503 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:26 crc kubenswrapper[4829]: I0217 16:17:26.975975 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerID="0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237" exitCode=0 Feb 17 16:17:26 crc kubenswrapper[4829]: I0217 16:17:26.976430 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" 
event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerDied","Data":"0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237"} Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.434680 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.530710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.531497 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.531647 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.539071 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf2e81e7f-9610-493c-bdb8-6a7de58b94bf"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2e81e7f_9610_493c_bdb8_6a7de58b94bf.slice" Feb 17 16:17:28 crc kubenswrapper[4829]: E0217 16:17:28.539142 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths 
for [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2e81e7f_9610_493c_bdb8_6a7de58b94bf.slice" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.550918 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr" (OuterVolumeSpecName: "kube-api-access-pp9qr") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "kube-api-access-pp9qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.571826 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.607780 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data" (OuterVolumeSpecName: "config-data") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634278 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634322 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634336 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.998974 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerDied","Data":"0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847"} Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999502 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999130 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.000355 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.009242 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.301584 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302764 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302781 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302799 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302813 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302826 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302834 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302842 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" 
containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302847 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302862 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302868 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302916 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="init" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302923 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="init" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302930 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302936 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302949 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302954 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302964 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302970 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302985 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302991 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.303004 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303009 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303282 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303298 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303305 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303311 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc 
kubenswrapper[4829]: I0217 16:17:29.303323 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303335 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303344 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303353 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303360 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303372 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.307638 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.328622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.357686 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.358979 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.363271 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.368138 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.371778 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.372352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.378795 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.382807 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459352 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459471 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459666 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459768 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459791 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460175 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.463636 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.465758 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.474601 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nfxjw" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.474806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.488119 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.548786 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.550750 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.561693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.561882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.562051 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8kvfc" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563917 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563969 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563984 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564003 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564033 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564048 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564134 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568004 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568248 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568872 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.570991 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.572418 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: 
\"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.573147 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.576901 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.577514 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.585365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.593066 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.616836 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod 
\"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.619329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.631971 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.652893 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666928 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666972 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666993 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667046 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667131 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.708523 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.724039 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.725694 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.732380 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-68q4f" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.733721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.767121 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770323 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: 
\"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770481 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770496 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 
16:17:29.775157 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.795939 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.801504 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.804494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.805177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.805356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") 
pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.809370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.810686 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.813283 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.813376 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.814744 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.833261 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p9cb5" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.833494 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.834109 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.834801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.854311 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.872844 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.872920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.873071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") 
pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.920337 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.963032 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.965372 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977457 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977477 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977596 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977647 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.981760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:29.998091 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.048510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.080991 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081041 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081077 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081162 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081206 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081271 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkjbg\" (UniqueName: 
\"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081366 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081875 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.083471 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.105883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.111271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.116011 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.127234 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.135668 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145145 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145227 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfff2" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145544 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.157172 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.160253 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.160978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.168684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.217978 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218037 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218161 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218193 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218270 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218355 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.222965 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.224876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.225820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.227415 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.227958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.228414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.229149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod 
\"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.243302 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.246071 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.258506 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.258662 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.269622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.299933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.323947 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.323988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324223 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324239 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 
16:17:30.324265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324286 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.335181 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.335339 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.347384 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.350377 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.430090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431295 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431336 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431371 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431449 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.437202 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.437483 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.444732 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc 
kubenswrapper[4829]: I0217 16:17:30.448088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.449743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.453317 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.455863 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.457721 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459135 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xbdvq" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.460660 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.462344 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.465462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.503180 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.534935 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535163 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535192 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535213 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.548730 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.555874 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.558369 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.558592 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.567452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.623689 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.629994 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636447 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636502 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636526 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636542 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636599 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636675 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636748 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636778 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636804 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636822 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636902 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.637186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.637266 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.640303 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.642253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.643391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.643414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.658843 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.665319 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.665348 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738508 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738631 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738691 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738730 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738857 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.739697 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.739758 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.741055 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744134 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744534 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744554 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744895 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.745750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.749702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.756211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.777178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"]
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.782745 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.825139 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.877095 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.899712 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgkjx"]
Feb 17 16:17:30 crc kubenswrapper[4829]: W0217 16:17:30.965224 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d3ed60_8c68_44ec_aaa1_806b5aec5df1.slice/crio-0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958 WatchSource:0}: Error finding container 0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958: Status 404 returned error can't find the container with id 0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.123738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerStarted","Data":"d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.125413 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerStarted","Data":"0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.127925 4829 generic.go:334] "Generic (PLEG): container finished" podID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerID="7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933" exitCode=0
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.129112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerDied","Data":"7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.129144 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerStarted","Data":"72a8b4daac2d9d070607a45eeb2b33af1441c752a45b30e8f19c0d738ce701e3"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.324249 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s649"]
Feb 17 16:17:31 crc kubenswrapper[4829]: W0217 16:17:31.326693 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff4740d_5b36_4273_be02_50bec771e157.slice/crio-d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365 WatchSource:0}: Error finding container d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365: Status 404 returned error can't find the container with id d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.333830 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xh926"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.369158 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n46p8"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.401865 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.755023 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.801661 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jrh5n"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.837880 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.869659 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.909645 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.082625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.082960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083137 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083245 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083350 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.109861 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.128961 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x" (OuterVolumeSpecName: "kube-api-access-wlx4x") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "kube-api-access-wlx4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: W0217 16:17:32.132617 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb920f32_c8e7_45d7_8c19_40ae485d7c2f.slice/crio-f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6 WatchSource:0}: Error finding container f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6: Status 404 returned error can't find the container with id f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.139642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.152576 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.157603 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerStarted","Data":"d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.160265 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.160642 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerStarted","Data":"c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.163061 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166684 4829 generic.go:334] "Generic (PLEG): container finished" podID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerID="1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06" exitCode=0
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166775 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerStarted","Data":"de029d86f193dd1c04a644dfbce66d4d5a98f68124c1549de6eaa99d3eb1caa6"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166891 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config" (OuterVolumeSpecName: "config") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.168243 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.176610 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.176617 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerDied","Data":"72a8b4daac2d9d070607a45eeb2b33af1441c752a45b30e8f19c0d738ce701e3"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.177335 4829 scope.go:117] "RemoveContainer" containerID="7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193539 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193579 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193593 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193626 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193638 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193650 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerStarted","Data":"8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.203429 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerStarted","Data":"add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.209530 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.214434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerStarted","Data":"1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.214688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerStarted","Data":"7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.242818 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7l7pb" podStartSLOduration=3.242800919 podStartE2EDuration="3.242800919s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:32.222380587 +0000 UTC m=+1364.639398565" watchObservedRunningTime="2026-02-17 16:17:32.242800919 +0000 UTC m=+1364.659818897"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.267788 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jrh5n" podStartSLOduration=3.267761773 podStartE2EDuration="3.267761773s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:32.237038013 +0000 UTC m=+1364.654056001" watchObservedRunningTime="2026-02-17 16:17:32.267761773 +0000 UTC m=+1364.684779751"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.337736 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.354109 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.837323 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.263042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerStarted","Data":"4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.265051 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn"
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.286329 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podStartSLOduration=4.286309645 podStartE2EDuration="4.286309645s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:33.285789121 +0000 UTC m=+1365.702807119" watchObservedRunningTime="2026-02-17 16:17:33.286309645 +0000 UTC m=+1365.703327623"
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.289884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.293857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.293903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6"}
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.298504 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" path="/var/lib/kubelet/pods/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db/volumes"
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.315073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f"}
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.317105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8"}
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329564 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736"}
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329926 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log" containerID="cri-o://435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.330008 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd" containerID="cri-o://6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329805 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" containerID="cri-o://5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329608 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" containerID="cri-o://ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.367171 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.367153281 podStartE2EDuration="6.367153281s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:35.349122894 +0000 UTC m=+1367.766140872" watchObservedRunningTime="2026-02-17 16:17:35.367153281 +0000 UTC m=+1367.784171259"
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.390826 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.3908102190000005 podStartE2EDuration="6.390810219s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:35.379012291 +0000 UTC m=+1367.796030269" watchObservedRunningTime="2026-02-17 16:17:35.390810219 +0000 UTC m=+1367.807828197"
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.345366 4829 generic.go:334] "Generic (PLEG): container finished" podID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerID="add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491" exitCode=0
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.345454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerDied","Data":"add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491"}
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349033 4829 generic.go:334] "Generic (PLEG): container finished" podID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f"
containerID="6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" exitCode=0 Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349069 4829 generic.go:334] "Generic (PLEG): container finished" podID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerID="435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886" exitCode=143 Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f"} Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349119 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886"} Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351443 4829 generic.go:334] "Generic (PLEG): container finished" podID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerID="5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" exitCode=143 Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351463 4829 generic.go:334] "Generic (PLEG): container finished" podID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerID="ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" exitCode=143 Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351478 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736"} Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8"} Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.348778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.442689 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.442976 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" containerID="cri-o://111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2" gracePeriod=10 Feb 17 16:17:41 crc kubenswrapper[4829]: I0217 16:17:41.421361 4829 generic.go:334] "Generic (PLEG): container finished" podID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerID="111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2" exitCode=0 Feb 17 16:17:41 crc kubenswrapper[4829]: I0217 16:17:41.421410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2"} Feb 17 16:17:45 crc kubenswrapper[4829]: I0217 16:17:45.247467 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.831498 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.832519 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEsca
lation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-8s649_openstack(8ff4740d-5b36-4273-be02-50bec771e157): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.834559 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-8s649" podUID="8ff4740d-5b36-4273-be02-50bec771e157" Feb 17 16:17:48 crc kubenswrapper[4829]: E0217 16:17:48.515804 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-8s649" podUID="8ff4740d-5b36-4273-be02-50bec771e157" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.247022 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.568094 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571"} Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.568691 4829 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.569061 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.578132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerDied","Data":"d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf"} Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.578177 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.581494 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.667959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" 
(UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669294 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669322 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669408 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669425 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669529 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670284 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs" (OuterVolumeSpecName: "logs") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670668 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670773 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670807 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.671901 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.677360 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.681624 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.681783 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq" (OuterVolumeSpecName: "kube-api-access-pwxrq") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "kube-api-access-pwxrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.690653 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts" (OuterVolumeSpecName: "scripts") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.690791 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.705984 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts" (OuterVolumeSpecName: "scripts") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.709008 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l" (OuterVolumeSpecName: "kube-api-access-pd45l") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "kube-api-access-pd45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.716729 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.719291 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (OuterVolumeSpecName: "glance") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "pvc-60154460-e4e5-447b-9d26-02e14a9d8490". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.728995 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data" (OuterVolumeSpecName: "config-data") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.769001 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data" (OuterVolumeSpecName: "config-data") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773842 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773870 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773880 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773889 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc 
kubenswrapper[4829]: I0217 16:17:50.773918 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773929 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773938 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773945 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773954 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773961 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773970 4829 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.774194 4829 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.783337 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.824406 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.824673 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490") on node "crc" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876371 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876788 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876804 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.586418 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.586451 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.646967 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.657047 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.677317 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.697596 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698331 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698366 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698432 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698452 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 
16:17:51.698471 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698488 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698508 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698519 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698881 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698945 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698960 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698977 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.705377 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.707807 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.708472 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.716165 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.728412 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.770401 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.771988 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.776930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.776968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777450 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777910 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.784593 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.816882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.816996 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817086 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817136 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817163 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817194 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: 
I0217 16:17:51.817469 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.920487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921324 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921348 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921373 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921417 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921437 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921466 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.922697 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.922996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.926278 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.926323 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.930752 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.933620 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.940473 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.943743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.944630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.993507 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.023884 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024035 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024052 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024215 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" 
(UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.030695 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.172265 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.173337 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.173893 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.174245 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.175964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod 
\"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.178080 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.313554 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" path="/var/lib/kubelet/pods/1f87ae24-e966-4385-8a84-cb66b14cd28b/volumes" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.341296 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" path="/var/lib/kubelet/pods/3a50b549-2eb5-4bfa-8f1d-3b862974ceed/volumes" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.399331 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.519007 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600449 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600487 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600524 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600728 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600922 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601274 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs" (OuterVolumeSpecName: "logs") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601513 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601525 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.608257 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb" (OuterVolumeSpecName: "kube-api-access-t29jb") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "kube-api-access-t29jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.621732 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts" (OuterVolumeSpecName: "scripts") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.635397 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (OuterVolumeSpecName: "glance") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.639583 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.662803 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.667865 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data" (OuterVolumeSpecName: "config-data") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6"} Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692652 4829 scope.go:117] "RemoveContainer" containerID="6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692799 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704201 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704241 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704263 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704283 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704299 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704356 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.736547 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.736752 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537") on node "crc"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.776947 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.796495 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.808903 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.826944 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4829]: E0217 16:17:59.827484 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827499 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log"
Feb 17 16:17:59 crc kubenswrapper[4829]: E0217 16:17:59.827513 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827518 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827738 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827751 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.828863 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.831804 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.831901 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.850007 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910654 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910854 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910938 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910963 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013219 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013452 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013476 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013726 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013888 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.016079 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.016339 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.018773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.019843 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.020630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.029347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.032021 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.063109 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.167125 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.246977 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.247641 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-k8994"
Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.298195 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" path="/var/lib/kubelet/pods/bb920f32-c8e7-45d7-8c19-40ae485d7c2f/volumes"
Feb 17 16:18:03 crc kubenswrapper[4829]: I0217 16:18:03.758674 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.813081 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.813408 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lrq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xh926_openstack(7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.815352 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xh926" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"
Feb 17 16:18:03 crc kubenswrapper[4829]: I0217 16:18:03.914226 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994"
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014434 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014910 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015010 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015109 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.024191 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k" (OuterVolumeSpecName: "kube-api-access-xwc8k") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "kube-api-access-xwc8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.081324 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.083886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config" (OuterVolumeSpecName: "config") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.086670 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.107298 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117533 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") "
Feb 17 16:18:04 crc kubenswrapper[4829]: W0217 16:18:04.117683 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9b4eb784-8c4c-4875-ae8f-e8882eb9989f/volumes/kubernetes.io~configmap/dns-svc
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117716 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118170 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118192 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118201 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118211 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118222 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118230 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.760340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"4f1a71803b633d03391de17f6f16604c5e107eae12d0b26db71e47dca08add20"}
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.760363 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994"
Feb 17 16:18:04 crc kubenswrapper[4829]: E0217 16:18:04.763502 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xh926" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.804217 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"]
Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.817984 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"]
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.116556 4829 scope.go:117] "RemoveContainer" containerID="435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886"
Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.136463 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.136692 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js29x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n46p8_openstack(f3d9b56f-3f6b-4fb6-af65-8f2410f60e20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.138382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n46p8" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.248034 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout"
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.256493 4829 scope.go:117] "RemoveContainer" containerID="111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2"
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.326030 4829 scope.go:117] "RemoveContainer" containerID="06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd"
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.627478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpsml"]
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.762511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:18:05 crc kubenswrapper[4829]: W0217 16:18:05.771413 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f146bc_ed08_462a_9c4a_f5641b460469.slice/crio-c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1 WatchSource:0}: Error finding container c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1: Status 404 returned error can't find the container with id c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.774272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90"}
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.780511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerStarted","Data":"7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96"}
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.785203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerStarted","Data":"b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f"}
Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.810764 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n46p8" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"
Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.811957 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-mgkjx" podStartSLOduration=2.690367483 podStartE2EDuration="36.811936728s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:30.98032115 +0000 UTC m=+1363.397339118" lastFinishedPulling="2026-02-17 16:18:05.101890375 +0000 UTC m=+1397.518908363" observedRunningTime="2026-02-17 16:18:05.799327137 +0000 UTC m=+1398.216345125" watchObservedRunningTime="2026-02-17 16:18:05.811936728 +0000 UTC m=+1398.228954706"
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.295051 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" path="/var/lib/kubelet/pods/9b4eb784-8c4c-4875-ae8f-e8882eb9989f/volumes"
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.697417 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.816243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501"}
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.816284 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1"}
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.818222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerStarted","Data":"0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2"}
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.821461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerStarted","Data":"3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6"}
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.843825 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8s649" podStartSLOduration=3.515843442 podStartE2EDuration="37.84380678s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.329216521 +0000 UTC m=+1363.746234499" lastFinishedPulling="2026-02-17 16:18:05.657179859 +0000 UTC m=+1398.074197837" observedRunningTime="2026-02-17 16:18:06.836954615 +0000 UTC m=+1399.253972593" watchObservedRunningTime="2026-02-17 16:18:06.84380678 +0000 UTC m=+1399.260824748"
Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.857552 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tpsml" podStartSLOduration=15.85753647 podStartE2EDuration="15.85753647s" podCreationTimestamp="2026-02-17 16:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:06.85346091 +0000 UTC m=+1399.270478888" watchObservedRunningTime="2026-02-17 16:18:06.85753647 +0000 UTC m=+1399.274554448"
Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.834680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13"}
Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.840043 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5"}
Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.845083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b"}
Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.845129 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"26df09ac78a076eb0f2fab2e97427288c9dbe4295d421971b90f039ccad0b50a"}
Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.868610 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.868573889 podStartE2EDuration="16.868573889s" podCreationTimestamp="2026-02-17 16:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:07.863814601 +0000 UTC m=+1400.280832589" watchObservedRunningTime="2026-02-17 16:18:07.868573889 +0000 UTC m=+1400.285591867"
Feb 17 16:18:08 crc kubenswrapper[4829]: I0217 16:18:08.865389 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9"}
Feb 17 16:18:08 crc kubenswrapper[4829]: I0217 16:18:08.906763 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.906738181 podStartE2EDuration="9.906738181s" podCreationTimestamp="2026-02-17 16:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:08.898754646 +0000 UTC m=+1401.315772624" watchObservedRunningTime="2026-02-17 16:18:08.906738181 +0000 UTC m=+1401.323756159"
Feb 17 16:18:09 crc kubenswrapper[4829]: I0217 16:18:09.882785 4829 generic.go:334] "Generic (PLEG): container finished" podID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerID="3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6" exitCode=0
Feb 17 16:18:09 crc kubenswrapper[4829]: I0217 16:18:09.882943 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerDied","Data":"3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6"}
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.168857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.168907 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.212235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.213064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902655 4829 generic.go:334] "Generic (PLEG): container finished" podID="8ff4740d-5b36-4273-be02-50bec771e157" containerID="0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2" exitCode=0
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerDied","Data":"0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2"}
Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902925 4829 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.903230 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:11 crc kubenswrapper[4829]: I0217 16:18:11.913886 4829 generic.go:334] "Generic (PLEG): container finished" podID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerID="1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115" exitCode=0 Feb 17 16:18:11 crc kubenswrapper[4829]: I0217 16:18:11.913952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerDied","Data":"1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115"} Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.032224 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.032280 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.091265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.096878 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.929307 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.929339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.790138 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.798175 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.804056 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869402 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869478 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869521 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869636 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869720 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869802 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869822 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 
17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869893 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869921 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869940 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869995 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.876930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.880042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts" (OuterVolumeSpecName: "scripts") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.880858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h" (OuterVolumeSpecName: "kube-api-access-24h9h") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "kube-api-access-24h9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.881709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs" (OuterVolumeSpecName: "logs") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.881994 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br" (OuterVolumeSpecName: "kube-api-access-lj6br") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "kube-api-access-lj6br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.884307 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg" (OuterVolumeSpecName: "kube-api-access-vkjbg") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "kube-api-access-vkjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.885720 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts" (OuterVolumeSpecName: "scripts") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.897229 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.915090 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config" (OuterVolumeSpecName: "config") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.923908 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data" (OuterVolumeSpecName: "config-data") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.926417 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data" (OuterVolumeSpecName: "config-data") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.929140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.942799 4829 generic.go:334] "Generic (PLEG): container finished" podID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerID="7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96" exitCode=0 Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.942848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerDied","Data":"7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945002 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945011 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerDied","Data":"b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945052 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerDied","Data":"7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946490 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946525 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.949043 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.955391 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.956215 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.964565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerDied","Data":"d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.964647 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975123 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975188 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975217 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975238 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975255 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975270 4829 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975285 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975301 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975318 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975637 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976089 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976098 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976107 4829 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976116 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.206434 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.206989 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207009 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207029 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207038 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207050 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207058 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207074 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 
16:18:14.207082 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207098 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="init" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207109 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="init" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207416 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207440 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207462 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207484 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.209027 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.221365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289506 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289622 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289663 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.351996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.354319 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.361693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.361999 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.362109 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.362211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfff2" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.366788 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390760 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390823 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390843 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390919 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390950 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: 
\"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390972 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390991 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391008 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391052 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391080 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.393459 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.400714 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.400883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.417550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493575 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " 
pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.497212 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.497682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.498927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.499851 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.513254 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.533081 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.688112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.970390 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.972437 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978144 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978518 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979071 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979175 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979272 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:18:15 crc 
kubenswrapper[4829]: I0217 16:18:15.003414 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003756 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 
16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003958 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.006467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.021211 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.053645 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0"} Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.063263 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.066186 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.070861 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p9cb5" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071145 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071238 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071397 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071439 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.082178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109712 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109787 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109865 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109968 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110029 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110066 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110191 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: 
\"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110226 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.118908 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.136435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.137200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.167693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171116 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171749 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.177663 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: 
\"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227180 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227216 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227255 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " 
pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227306 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.230563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.232182 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.237445 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.239439 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.239665 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.240304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.250364 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.261213 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.308167 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.335213 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.373735 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.375902 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.400354 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439224 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvkp\" (UniqueName: \"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439291 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439367 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439387 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439432 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541196 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541492 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvkp\" (UniqueName: 
\"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541531 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541549 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.543460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.543515 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.546499 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod 
\"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.548996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.549631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.553099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.554283 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.554658 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc 
kubenswrapper[4829]: I0217 16:18:15.563263 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.565748 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvkp\" (UniqueName: \"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.628911 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648450 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: 
\"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.663275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx" (OuterVolumeSpecName: "kube-api-access-tzhzx") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "kube-api-access-tzhzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.694162 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:15 crc kubenswrapper[4829]: W0217 16:18:15.697628 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5 WatchSource:0}: Error finding container b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5: Status 404 returned error can't find the container with id b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5 Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.697937 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.762189 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.791768 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:15 crc kubenswrapper[4829]: E0217 16:18:15.794605 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.794638 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.795032 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.797333 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.830644 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.864926 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.864962 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865086 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865123 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865744 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.912009 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data" (OuterVolumeSpecName: "config-data") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.958795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.958958 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.959046 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968603 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: 
\"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.969553 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.971861 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" 
(UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.976038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.994815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.132816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerDied","Data":"0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958"} Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.133017 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958" Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.133072 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.156643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5"} Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.157103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"] Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.189850 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192342 4829 generic.go:334] "Generic (PLEG): container finished" podID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerID="496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623" exitCode=0 Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623"} Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192417 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerStarted","Data":"38d0e25b8babc9cbba47e39ba8aa5d5221b3d6a4b4fa42411be271008d0092b7"} Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.205682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33"} Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.376800 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.377032 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"] Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.116784 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.251565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.267549 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerStarted","Data":"ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.270388 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.312947 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.315082 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868ff7b66c-lx7qv" event={"ID":"c2a8da85-ca3d-4368-8a34-4db948e7f6f3","Type":"ContainerStarted","Data":"293cf971e77cfa7e607294baa6a2d1b813e217e1034d8b25d770660e55413394"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.331225 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.332884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"99e7419feafe64980110b2189931ffa931f5a97e2e78bd4c9d2b0c71000b41c8"} Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.336309 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.342053 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" podStartSLOduration=3.342014305 podStartE2EDuration="3.342014305s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:17.296426334 +0000 UTC m=+1409.713444312" watchObservedRunningTime="2026-02-17 16:18:17.342014305 +0000 UTC m=+1409.759032293" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.402875 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"] Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.405680 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.414125 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.414561 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.421383 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"] Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.551737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.552259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.552564 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.555956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.555994 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.556073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.556116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658231 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658448 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod 
\"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.664219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.664318 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.674659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.674659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.675731 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 
16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.680457 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.681951 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.906230 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.368261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868ff7b66c-lx7qv" event={"ID":"c2a8da85-ca3d-4368-8a34-4db948e7f6f3","Type":"ContainerStarted","Data":"8096b48936ccfe75f025d4625655ea441fda4c4d7d6cc2afe71cf8d7df1d1f16"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.371695 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.383648 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.383797 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" 
containerID="cri-o://92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" gracePeriod=30 Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.384032 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.384066 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" containerID="cri-o://039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" gracePeriod=30 Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.403726 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"3964b3018b66ff82b3ca2cedd3b20a2a9b4c48bf635ff2c298427c883ec8e0fd"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.403770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"c1ab826ad101ffe475ca27f698998fe44a0abc2c600d5408ea2efc5987d8ecc6"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.404933 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.404957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.417734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.417773 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.418542 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.426568 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c"} Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.465620 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59566c7c9b-gpfcg" podStartSLOduration=3.465603964 podStartE2EDuration="3.465603964s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.461150344 +0000 UTC m=+1410.878168322" watchObservedRunningTime="2026-02-17 16:18:18.465603964 +0000 UTC m=+1410.882621932" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.519956 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c89899bcb-82htl" podStartSLOduration=3.5199380209999998 podStartE2EDuration="3.519938021s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.518289487 +0000 UTC m=+1410.935307465" watchObservedRunningTime="2026-02-17 16:18:18.519938021 +0000 UTC m=+1410.936955989" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.557800 4829 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-868ff7b66c-lx7qv" podStartSLOduration=4.557775813 podStartE2EDuration="4.557775813s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.538693387 +0000 UTC m=+1410.955711365" watchObservedRunningTime="2026-02-17 16:18:18.557775813 +0000 UTC m=+1410.974793791" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.610704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"] Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.615204 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b56799c5b-dmgjh" podStartSLOduration=4.615186293 podStartE2EDuration="4.615186293s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.568980155 +0000 UTC m=+1410.985998133" watchObservedRunningTime="2026-02-17 16:18:18.615186293 +0000 UTC m=+1411.032204271" Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.641637 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b8b56fc4d-7pnvr" podStartSLOduration=3.641555705 podStartE2EDuration="3.641555705s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.58879282 +0000 UTC m=+1411.005810798" watchObservedRunningTime="2026-02-17 16:18:18.641555705 +0000 UTC m=+1411.058573683" Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.454396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" 
event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerStarted","Data":"b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc"} Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.458626 4829 generic.go:334] "Generic (PLEG): container finished" podID="75783ffe-a672-4585-ae18-3c162d659ee7" containerID="039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" exitCode=0 Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.458691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef"} Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.471816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"6c93a3a441ec63ea8f746c6d191f2df358ac22c0b4d899fccc8037364ad61f88"} Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472074 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"4f68087d01fd3239a42bef0a703c07fabdfa9de4a1539117eb8d4c29d0d0c066"} Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472088 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"790783a5b1b8d3209886a56ceddaa256888f2baf4b645b85a1d169eec7f9c40d"} Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472971 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472997 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c89899bcb-82htl" 
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.474271 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xh926" podStartSLOduration=3.038650875 podStartE2EDuration="50.474254798s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.33884914 +0000 UTC m=+1363.755867118" lastFinishedPulling="2026-02-17 16:18:18.774453063 +0000 UTC m=+1411.191471041" observedRunningTime="2026-02-17 16:18:19.467023173 +0000 UTC m=+1411.884041151" watchObservedRunningTime="2026-02-17 16:18:19.474254798 +0000 UTC m=+1411.891272776" Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.495205 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5598cc6dcc-p2b29" podStartSLOduration=2.495183933 podStartE2EDuration="2.495183933s" podCreationTimestamp="2026-02-17 16:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:19.48949539 +0000 UTC m=+1411.906513368" watchObservedRunningTime="2026-02-17 16:18:19.495183933 +0000 UTC m=+1411.912201911" Feb 17 16:18:20 crc kubenswrapper[4829]: I0217 16:18:20.487572 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:21 crc kubenswrapper[4829]: I0217 16:18:21.498640 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerStarted","Data":"e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6"} Feb 17 16:18:21 crc kubenswrapper[4829]: I0217 16:18:21.517589 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n46p8" podStartSLOduration=3.809259693 podStartE2EDuration="52.51755162s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 
16:17:31.385716826 +0000 UTC m=+1363.802734804" lastFinishedPulling="2026-02-17 16:18:20.094008753 +0000 UTC m=+1412.511026731" observedRunningTime="2026-02-17 16:18:21.516142452 +0000 UTC m=+1413.933160440" watchObservedRunningTime="2026-02-17 16:18:21.51755162 +0000 UTC m=+1413.934569608" Feb 17 16:18:22 crc kubenswrapper[4829]: I0217 16:18:22.531473 4829 generic.go:334] "Generic (PLEG): container finished" podID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerID="b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc" exitCode=0 Feb 17 16:18:22 crc kubenswrapper[4829]: I0217 16:18:22.531583 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerDied","Data":"b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc"} Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.351349 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421352 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421626 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.428999 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7" (OuterVolumeSpecName: "kube-api-access-8lrq7") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "kube-api-access-8lrq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.448553 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.465360 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523829 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523865 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523874 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.534469 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552331 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerDied","Data":"c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8"} Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552368 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552380 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.628585 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.628863 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" containerID="cri-o://4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485" gracePeriod=10 Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.822185 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"] Feb 17 16:18:24 crc kubenswrapper[4829]: E0217 16:18:24.822914 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.822935 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.823146 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.824380 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.826789 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-68q4f" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.827112 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.830483 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.854345 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"] Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.856350 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.862989 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.894985 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"] Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944413 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944508 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944610 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944679 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944703 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc 
kubenswrapper[4829]: I0217 16:18:24.944760 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944806 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.945050 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"] Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.992261 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:24 crc 
kubenswrapper[4829]: I0217 16:18:24.994367 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.015113 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048372 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048414 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048435 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048836 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048909 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 
17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049035 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.073360 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.084623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.086418 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.089382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.100218 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.100485 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.103123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.112646 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"]
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.117391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.119422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.123655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.124143 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.127802 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.129392 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158240 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158332 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158375 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158668 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158704 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158743 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158815 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.159463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.163065 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.171182 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.171515 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.171529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.184115 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.194711 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.197038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265121 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265450 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265558 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.273764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.277056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.277481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.281165 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.304152 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.335227 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.357817 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.467269 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.566144 4829 generic.go:334] "Generic (PLEG): container finished" podID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerID="4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485" exitCode=0
Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.566199 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485"}
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.186849 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293438 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293481 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293551 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293654 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293728 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") "
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.312277 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p" (OuterVolumeSpecName: "kube-api-access-rg66p") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "kube-api-access-rg66p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.398485 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.405904 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.411127 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.415433 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.420595 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.424834 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.437753 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config" (OuterVolumeSpecName: "config") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517210 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517469 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517478 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517487 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.553249 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"]
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"de029d86f193dd1c04a644dfbce66d4d5a98f68124c1549de6eaa99d3eb1caa6"}
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580063 4829 scope.go:117] "RemoveContainer" containerID="4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580196 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.583162 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerStarted","Data":"62bf9e0fd2a55d71204acfd621962b635d4b2d6d5394b119cd1c1782a276bc21"}
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816"}
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587558 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" containerID="cri-o://9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" gracePeriod=30
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587838 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588102 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" containerID="cri-o://bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" gracePeriod=30
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588149 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" containerID="cri-o://2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" gracePeriod=30
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588184 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" containerID="cri-o://4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" gracePeriod=30
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.628083 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.39316978 podStartE2EDuration="57.628066002s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.840323232 +0000 UTC m=+1364.257341210" lastFinishedPulling="2026-02-17 16:18:26.075219454 +0000 UTC m=+1418.492237432" observedRunningTime="2026-02-17 16:18:26.606955142 +0000 UTC m=+1419.023973120" watchObservedRunningTime="2026-02-17 16:18:26.628066002 +0000 UTC m=+1419.045083980"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.633040 4829 scope.go:117] "RemoveContainer" containerID="1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06"
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.688276 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"]
Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.716756 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"]
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.015037 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"]
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.034539 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"]
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.045149 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"]
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.601880 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"544293b5af95509fff3676402e367e7f68e9f514d3e3ad411d8004de6b4de9e6"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615525 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" exitCode=2
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615554 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" exitCode=0
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615561 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" exitCode=0
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615635 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615662 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.623189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"e7113f27d1b432f6c47123480b460d226a2414586cf047a6acf509c9bb1d2e5e"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630333 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630346 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630598 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630640 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb4f96fd4-bmlr5"
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.638115 4829 generic.go:334] "Generic (PLEG): container finished" podID="1665c777-7859-4f39-a063-275485b6321c" containerID="a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0" exitCode=0
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.638159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0"}
Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.654681 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podStartSLOduration=2.654660891 podStartE2EDuration="2.654660891s" podCreationTimestamp="2026-02-17 16:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:27.653975052 +0000 UTC m=+1420.070993060" watchObservedRunningTime="2026-02-17 16:18:27.654660891 +0000 UTC m=+1420.071678879"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.261123 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"]
Feb 17 16:18:28 crc kubenswrapper[4829]: E0217 16:18:28.262145 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.262169 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns"
Feb 17 16:18:28 crc kubenswrapper[4829]: E0217 16:18:28.262225 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="init"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.262235 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="init"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.262546 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.264275 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.266997 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.267288 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.303421 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" path="/var/lib/kubelet/pods/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb/volumes"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.304074 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"]
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376724 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376784 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376914 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376961 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.377018 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.377070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479300 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480095 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x"
Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\")
pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.481063 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.485006 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.485044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" 
(UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.486391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.493078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.495785 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.497382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.625105 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.651072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerStarted","Data":"faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173"} Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.651211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.653855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerDied","Data":"e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6"} Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.654268 4829 generic.go:334] "Generic (PLEG): container finished" podID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerID="e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6" exitCode=0 Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.679887 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" podStartSLOduration=4.679867822 podStartE2EDuration="4.679867822s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:28.669283807 +0000 UTC m=+1421.086301785" watchObservedRunningTime="2026-02-17 16:18:28.679867822 +0000 UTC m=+1421.096885790" Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.572285 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"] Feb 17 16:18:29 crc kubenswrapper[4829]: W0217 16:18:29.587336 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod652438ae_668e_4017_a88c_c6737fd0db78.slice/crio-e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1 WatchSource:0}: Error finding container e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1: Status 404 returned error can't find the container with id e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1 Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.669965 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"eb3b40e87ffac66715998434cf10dc5fc9dcbf85032c3f8e07aef7c8d4a2a0b6"} Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.674099 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"82ebfe753beefc9f7891ec2ff2758c732af241abd532751ccfedd636aa50a2f0"} Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.676294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.039866 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125545 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125638 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125704 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125880 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125971 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.126052 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.131675 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.135992 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts" (OuterVolumeSpecName: "scripts") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.136083 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.136089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x" (OuterVolumeSpecName: "kube-api-access-js29x") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "kube-api-access-js29x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.173353 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.213693 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data" (OuterVolumeSpecName: "config-data") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228218 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228251 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228271 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228279 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228289 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.690301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"c2cc487209d11dd5958d6dcb029007ec83eaf2645cbae4205326dabe14bcc186"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"2e961bc610251c1ba1fa6161ac0bdfac9cfdd30ee02b2dd2de841f591598872c"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693613 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693633 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"af70644eb88d7fe0e69e15f4389b7136078e0535542f662edd9ae2d09fbfb118"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.696926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"4639261727b0d8cf3bc0404bc0629163a34a5a1de1a0b8aacb6866651c8d1fbc"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerDied","Data":"8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699223 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699223 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.739896 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" podStartSLOduration=4.666176773 podStartE2EDuration="6.739873724s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="2026-02-17 16:18:27.022107001 +0000 UTC m=+1419.439124979" lastFinishedPulling="2026-02-17 16:18:29.095803962 +0000 UTC m=+1421.512821930" observedRunningTime="2026-02-17 16:18:30.725020944 +0000 UTC m=+1423.142038932" watchObservedRunningTime="2026-02-17 16:18:30.739873724 +0000 UTC m=+1423.156891712" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.772669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-744588c6bd-fsx8x" podStartSLOduration=2.77081679 podStartE2EDuration="2.77081679s" podCreationTimestamp="2026-02-17 16:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
16:18:30.749845935 +0000 UTC m=+1423.166863913" watchObservedRunningTime="2026-02-17 16:18:30.77081679 +0000 UTC m=+1423.187834768" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.784237 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-765797c7c9-2cts6" podStartSLOduration=4.707947841 podStartE2EDuration="6.784220562s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="2026-02-17 16:18:27.029535681 +0000 UTC m=+1419.446553659" lastFinishedPulling="2026-02-17 16:18:29.105808402 +0000 UTC m=+1421.522826380" observedRunningTime="2026-02-17 16:18:30.776190975 +0000 UTC m=+1423.193208953" watchObservedRunningTime="2026-02-17 16:18:30.784220562 +0000 UTC m=+1423.201238540" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.019247 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: E0217 16:18:31.019794 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.019813 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.020039 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.026979 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.029115 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8kvfc" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.031005 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.033950 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.034268 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.038561 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049377 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049394 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049421 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.099102 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.099310 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns" containerID="cri-o://faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173" gracePeriod=10 Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.139400 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.141978 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155727 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155824 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155867 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155885 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155903 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155932 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155951 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155978 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " 
pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156844 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"]
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.157305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.167932 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.168692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.168769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.171119 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.200210 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.277854 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280203 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280779 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.281746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.295816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.296760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.297434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.298211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.335545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.376094 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.497412 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.514068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.520211 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.533531 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.601632 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.711862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.711909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712286 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712406 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.769181 4829 generic.go:334] "Generic (PLEG): container finished" podID="1665c777-7859-4f39-a063-275485b6321c" containerID="faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173" exitCode=0
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.770307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173"}
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.814944 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815249 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815401 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815447 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815469 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815534 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.816322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.824556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.833449 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.833696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.834170 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.841981 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0"
Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.846555 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.019830 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027513 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027561 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027709 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027911 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") "
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.035730 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6" (OuterVolumeSpecName: "kube-api-access-2v2m6") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "kube-api-access-2v2m6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.112428 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.131052 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.131994 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.132113 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.132222 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.151168 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.173106 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config" (OuterVolumeSpecName: "config") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.179134 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.221274 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233807 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233837 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233848 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.424650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"]
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.629695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:18:32 crc kubenswrapper[4829]: W0217 16:18:32.633992 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0 WatchSource:0}: Error finding container af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0: Status 404 returned error can't find the container with id af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.787230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14"}
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.789681 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerStarted","Data":"25c76158cbbd089e89beb231349a135df7ab735e2a004c66b802c8527397a342"}
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794176 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"62bf9e0fd2a55d71204acfd621962b635d4b2d6d5394b119cd1c1782a276bc21"}
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794258 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794411 4829 scope.go:117] "RemoveContainer" containerID="faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.799147 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0"}
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.841618 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"]
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.859032 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"]
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.878900 4829 scope.go:117] "RemoveContainer" containerID="a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.947830 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:18:32 crc kubenswrapper[4829]: E0217 16:18:32.948380 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="init"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="init"
Feb 17 16:18:32 crc kubenswrapper[4829]: E0217 16:18:32.948428 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948437 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948754 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.952374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.986642 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154857 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.155336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.155680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.181117 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.281396 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.570650 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.816056 4829 generic.go:334] "Generic (PLEG): container finished" podID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022" exitCode=0
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.816092 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"}
Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.923556 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.005659 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.045081 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:18:34 crc kubenswrapper[4829]: W0217 16:18:34.045642 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb993f64_fe54_4fed_9aca_68e11a71eee7.slice/crio-0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086 WatchSource:0}: Error finding container 0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086: Status 404 returned error can't find the container with id 0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.299143 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1665c777-7859-4f39-a063-275485b6321c" path="/var/lib/kubelet/pods/1665c777-7859-4f39-a063-275485b6321c/volumes"
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.857720 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858411 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" containerID="cri-o://7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" gracePeriod=30
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858664 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858860 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" containerID="cri-o://98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" gracePeriod=30
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869702 4829 generic.go:334] "Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7" exitCode=0
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869878 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.882863 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.895062 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.89504526 podStartE2EDuration="3.89504526s" podCreationTimestamp="2026-02-17 16:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:34.880516888 +0000 UTC m=+1427.297534866" watchObservedRunningTime="2026-02-17 16:18:34.89504526 +0000 UTC m=+1427.312063238"
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.898920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerStarted","Data":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"}
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.899036 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.930847 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" podStartSLOduration=3.930829246 podStartE2EDuration="3.930829246s" podCreationTimestamp="2026-02-17 16:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:34.922081309 +0000 UTC m=+1427.339099287" watchObservedRunningTime="2026-02-17 16:18:34.930829246 +0000 UTC m=+1427.347847224"
Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.927252 4829 generic.go:334] "Generic (PLEG): container finished" podID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerID="7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" exitCode=143
Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.927520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499"}
Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.958703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"}
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.376744 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.733186 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.439347586 podStartE2EDuration="6.733169271s" podCreationTimestamp="2026-02-17 16:18:30 +0000 UTC" firstStartedPulling="2026-02-17 16:18:32.21777029 +0000 UTC m=+1424.634788268" lastFinishedPulling="2026-02-17 16:18:33.511591975 +0000 UTC m=+1425.928609953" observedRunningTime="2026-02-17 16:18:36.009903752 +0000 UTC m=+1428.426921730" watchObservedRunningTime="2026-02-17 16:18:36.733169271 +0000 UTC m=+1429.150187249"
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.738813 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"]
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.742842 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5"
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.764204 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"]
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5"
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5"
Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2c7\" (UniqueName:
\"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961734 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.962521 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.962693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.978391 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357"} Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.988680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.101303 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.730187 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.733759 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.753361 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782461 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782511 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885081 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885713 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.886310 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.894347 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.905378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" 
Feb 17 16:18:37 crc kubenswrapper[4829]: W0217 16:18:37.915170 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd8f257_bfbb_4393_b0b3_f1c955a73e05.slice/crio-8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61 WatchSource:0}: Error finding container 8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61: Status 404 returned error can't find the container with id 8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61 Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.999805 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerStarted","Data":"8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61"} Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.013395 4829 generic.go:334] "Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357" exitCode=0 Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.013475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357"} Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.103420 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.659645 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.785985 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.038463 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e" exitCode=0 Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.038554 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.051052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.051096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.055663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6"} Feb 17 16:18:39 crc 
kubenswrapper[4829]: I0217 16:18:39.141151 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jpmqj" podStartSLOduration=3.534116666 podStartE2EDuration="7.14113322s" podCreationTimestamp="2026-02-17 16:18:32 +0000 UTC" firstStartedPulling="2026-02-17 16:18:34.873759815 +0000 UTC m=+1427.290777793" lastFinishedPulling="2026-02-17 16:18:38.480776369 +0000 UTC m=+1430.897794347" observedRunningTime="2026-02-17 16:18:39.108271392 +0000 UTC m=+1431.525289370" watchObservedRunningTime="2026-02-17 16:18:39.14113322 +0000 UTC m=+1431.558151198" Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.158975 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:40 crc kubenswrapper[4829]: I0217 16:18:40.073470 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" exitCode=0 Feb 17 16:18:40 crc kubenswrapper[4829]: I0217 16:18:40.076993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.086521 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c" exitCode=0 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.086586 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c"} Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 
16:18:41.134392 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.180473 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.255890 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.256136 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" containerID="cri-o://59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" gracePeriod=30 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.256298 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" containerID="cri-o://5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" gracePeriod=30 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.262695 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": EOF" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.499778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.558119 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.558370 4829 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" containerID="cri-o://ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" gracePeriod=10 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.034527 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.097509 4829 generic.go:334] "Generic (PLEG): container finished" podID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerID="ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" exitCode=0 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.097604 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba"} Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.101447 4829 generic.go:334] "Generic (PLEG): container finished" podID="6f8d0651-0829-4225-b98a-ffb3453058db" containerID="59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" exitCode=143 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.101750 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91"} Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.931647 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046142 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046203 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046246 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046320 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046398 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046469 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.053362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp" (OuterVolumeSpecName: "kube-api-access-5rfwp") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "kube-api-access-5rfwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.106085 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.123788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"38d0e25b8babc9cbba47e39ba8aa5d5221b3d6a4b4fa42411be271008d0092b7"} Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.123864 4829 scope.go:117] "RemoveContainer" containerID="ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.124049 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.132445 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.141644 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.147051 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149521 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149566 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149597 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149611 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149624 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.164243 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config" (OuterVolumeSpecName: "config") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.251833 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.283027 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.283069 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.422191 4829 scope.go:117] "RemoveContainer" containerID="496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.464410 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.480740 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.293419 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" path="/var/lib/kubelet/pods/d9d1bf31-65a7-4292-b06e-4f862ba023da/volumes" Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.341928 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:44 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:44 crc kubenswrapper[4829]: > Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.689959 4829 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.192:9696/\": dial tcp 10.217.0.192:9696: connect: connection refused" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.155511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerStarted","Data":"4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71"} Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.158349 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.177943 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g92l5" podStartSLOduration=4.368763597 podStartE2EDuration="9.177922161s" podCreationTimestamp="2026-02-17 16:18:36 +0000 UTC" firstStartedPulling="2026-02-17 16:18:39.043952415 +0000 UTC m=+1431.460970393" lastFinishedPulling="2026-02-17 16:18:43.853110979 +0000 UTC m=+1436.270128957" observedRunningTime="2026-02-17 16:18:45.173403479 +0000 UTC m=+1437.590421457" watchObservedRunningTime="2026-02-17 16:18:45.177922161 +0000 UTC m=+1437.594940149" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.877063 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:48766->10.217.0.201:9311: read: connection reset by peer" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.877092 4829 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:48752->10.217.0.201:9311: read: connection reset by peer" Feb 17 16:18:45 crc kubenswrapper[4829]: W0217 16:18:45.966395 4829 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f8d0651_0829_4225_b98a_ffb3453058db.slice/crio-550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c": error while statting cgroup v2: [read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f8d0651_0829_4225_b98a_ffb3453058db.slice/crio-550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c/pids.current: no such device], continuing to push stats Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.175425 4829 generic.go:334] "Generic (PLEG): container finished" podID="6f8d0651-0829-4225-b98a-ffb3453058db" containerID="5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" exitCode=0 Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.175722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34"} Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.177912 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" exitCode=0 Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.177979 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" 
event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.383198 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.433994 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.557896 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629635 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629752 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629843 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod 
\"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.630001 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.631260 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs" (OuterVolumeSpecName: "logs") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.645415 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.670045 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57" (OuterVolumeSpecName: "kube-api-access-llm57") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "kube-api-access-llm57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.699213 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.733734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data" (OuterVolumeSpecName: "config-data") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736082 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736239 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736324 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736396 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736477 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.757883 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.061758 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.101984 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.102048 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191781 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c"} Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191853 4829 scope.go:117] "RemoveContainer" containerID="5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191859 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" 
containerID="cri-o://d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" gracePeriod=30 Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.192033 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.193349 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" containerID="cri-o://52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" gracePeriod=30 Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.235526 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.236369 4829 scope.go:117] "RemoveContainer" containerID="59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.248417 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.056865 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.185771 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.185968 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59566c7c9b-gpfcg" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" containerID="cri-o://894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" gracePeriod=30 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.186304 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-59566c7c9b-gpfcg" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" containerID="cri-o://5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" gracePeriod=30 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.224807 4829 generic.go:334] "Generic (PLEG): container finished" podID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" exitCode=0 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.224926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"} Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.372212 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" path="/var/lib/kubelet/pods/6f8d0651-0829-4225-b98a-ffb3453058db/volumes" Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.598776 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:48 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:48 crc kubenswrapper[4829]: > Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340471 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340715 4829 generic.go:334] "Generic (PLEG): container finished" podID="75783ffe-a672-4585-ae18-3c162d659ee7" containerID="92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" exitCode=137 Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340791 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.372373 4829 generic.go:334] "Generic (PLEG): container finished" podID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerID="5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" exitCode=0 Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.372667 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.422081 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.467288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74rcl" podStartSLOduration=4.274159984 podStartE2EDuration="12.467266589s" podCreationTimestamp="2026-02-17 16:18:37 +0000 UTC" firstStartedPulling="2026-02-17 16:18:40.097149203 +0000 UTC m=+1432.514167181" lastFinishedPulling="2026-02-17 16:18:48.290255808 +0000 UTC m=+1440.707273786" observedRunningTime="2026-02-17 16:18:49.448951685 +0000 UTC m=+1441.865969663" watchObservedRunningTime="2026-02-17 16:18:49.467266589 +0000 UTC m=+1441.884284567" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.515981 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:49 crc 
kubenswrapper[4829]: I0217 16:18:49.516055 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637828 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637853 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637939 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637992 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.650474 4829 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.651355 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh" (OuterVolumeSpecName: "kube-api-access-fdsqh") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "kube-api-access-fdsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.747094 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.747377 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.834936 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config" (OuterVolumeSpecName: "config") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.837132 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.849264 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.849307 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.902779 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.934126 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.942410 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.954835 4829 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056592 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056735 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056797 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.057885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.062823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.063169 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts" (OuterVolumeSpecName: "scripts") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.063773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf" (OuterVolumeSpecName: "kube-api-access-zprpf") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "kube-api-access-zprpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.151835 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.161233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171192 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171217 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171229 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: 
I0217 16:18:50.171238 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171247 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.233843 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data" (OuterVolumeSpecName: "config-data") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.236361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.274139 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.310763 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.432213 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440802 4829 generic.go:334] "Generic (PLEG): container finished" podID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" exitCode=0 Feb 17 16:18:50 crc kubenswrapper[4829]: 
I0217 16:18:50.440883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440909 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440928 4829 scope.go:117] "RemoveContainer" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.441078 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.444780 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.444980 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c89899bcb-82htl" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" containerID="cri-o://0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" gracePeriod=30 Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.445216 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.446009 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.446798 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c89899bcb-82htl" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" containerID="cri-o://03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" gracePeriod=30 Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.632790 4829 scope.go:117] "RemoveContainer" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.653637 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.673883 4829 scope.go:117] "RemoveContainer" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.675056 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": container with ID starting with 52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6 not found: ID does not exist" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675086 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"} err="failed to get container status 
\"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": rpc error: code = NotFound desc = could not find container \"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": container with ID starting with 52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6 not found: ID does not exist" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675107 4829 scope.go:117] "RemoveContainer" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.675484 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": container with ID starting with d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045 not found: ID does not exist" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675514 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"} err="failed to get container status \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": rpc error: code = NotFound desc = could not find container \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": container with ID starting with d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045 not found: ID does not exist" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675529 4829 scope.go:117] "RemoveContainer" containerID="039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.676410 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.694383 4829 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.705356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719325 4829 scope.go:117] "RemoveContainer" containerID="92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719461 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719909 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719919 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719932 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719938 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719949 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719973 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="init" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719978 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="init" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719999 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720005 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720020 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720035 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720052 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720288 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720309 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720325 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720333 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720349 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720360 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720370 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.723253 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.729058 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.729201 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733762 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733866 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733979 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.734013 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.734038 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835607 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835886 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835991 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.836010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.836651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.840237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.840927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " 
pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.841166 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.842251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.859768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.040076 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.462092 4829 generic.go:334] "Generic (PLEG): container finished" podID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerID="0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" exitCode=143 Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.462469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca"} Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.563029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.912400 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.291019 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" path="/var/lib/kubelet/pods/2407c845-36e5-40f1-ae75-2b6c5fc31624/volumes" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.292642 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" path="/var/lib/kubelet/pods/75783ffe-a672-4585-ae18-3c162d659ee7/volumes" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.509742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"28ac3de4c1a189d11613ed8d58c9c4b54a79c2bcb3247b57f94a9a0ff335382d"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.509784 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"69036bfd3fbb9296e310bf3a04b61aef294ebb90f30d53d6ab6e737f0c120606"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.536001 4829 generic.go:334] "Generic (PLEG): container finished" podID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerID="894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" exitCode=0 Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.536046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.549118 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.232463 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328776 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328916 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328962 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.329061 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.329119 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.344813 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.383192 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x" (OuterVolumeSpecName: "kube-api-access-r7x8x") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "kube-api-access-r7x8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.446807 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.446842 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.465788 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config" (OuterVolumeSpecName: "config") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.502352 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.550067 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.550407 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668"} Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577792 4829 scope.go:117] "RemoveContainer" containerID="5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577937 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.585287 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.602090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"2174bb841778409a7defc29514cec46ed8eaee6c9fd6801785291f62b2a0736b"} Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.652265 4829 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.694069 4829 scope.go:117] "RemoveContainer" containerID="894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.913172 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.913154264 podStartE2EDuration="3.913154264s" podCreationTimestamp="2026-02-17 16:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:53.625336482 +0000 UTC m=+1446.042354460" watchObservedRunningTime="2026-02-17 16:18:53.913154264 +0000 UTC m=+1446.330172242" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.919991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:53 crc 
kubenswrapper[4829]: I0217 16:18:53.930202 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.293760 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" path="/var/lib/kubelet/pods/d027908d-4d46-40f2-a1d9-a6353e1d17be/volumes" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.352808 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:54 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:54 crc kubenswrapper[4829]: > Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568246 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:54 crc kubenswrapper[4829]: E0217 16:18:54.568864 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568886 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: E0217 16:18:54.568938 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568946 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.569258 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 
16:18:54.569284 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.570525 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.572608 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.573223 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lrgxv" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.574344 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.588743 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.639867 4829 generic.go:334] "Generic (PLEG): container finished" podID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerID="03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" exitCode=0 Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.640735 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c"} Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674501 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 
16:18:54.674558 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778187 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.780177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.784503 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.794011 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.805200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod 
\"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.890438 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.094150 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.185761 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.185874 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186244 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186308 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs" (OuterVolumeSpecName: "logs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186477 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186546 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.187508 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.199966 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts" (OuterVolumeSpecName: "scripts") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.201949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6" (OuterVolumeSpecName: "kube-api-access-v8mk6") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "kube-api-access-v8mk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.264461 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data" (OuterVolumeSpecName: "config-data") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291827 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291859 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291869 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.297393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.319787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.334484 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393446 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393478 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393488 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.465078 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.650936 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33"} Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.651202 4829 scope.go:117] "RemoveContainer" containerID="03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.650976 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.652423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4561ce68-ba71-42ad-95ec-de8b705a06ef","Type":"ContainerStarted","Data":"28b2e37b83015dfe816dba6c3ec6a070fe3a9ee96638e3d82b93345cb40a44f0"} Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.678872 4829 scope.go:117] "RemoveContainer" containerID="0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.700370 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.717262 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.041386 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.302751 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" path="/var/lib/kubelet/pods/e42d92c8-c673-4220-bee5-af7b9151fe77/volumes" Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.671943 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" exitCode=137 Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.672039 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816"} Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.163152 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234522 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234749 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234777 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234866 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234895 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vthx\" (UniqueName: 
\"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.235815 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.235995 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.243452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts" (OuterVolumeSpecName: "scripts") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.247789 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx" (OuterVolumeSpecName: "kube-api-access-7vthx") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "kube-api-access-7vthx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.331775 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342006 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342036 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342045 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342053 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") on 
node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342062 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.397497 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.400396 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data" (OuterVolumeSpecName: "config-data") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.443902 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.443929 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690208 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412"} Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690258 4829 scope.go:117] "RemoveContainer" containerID="bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690379 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.724680 4829 scope.go:117] "RemoveContainer" containerID="2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.728384 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.738790 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748467 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748943 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748977 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748992 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748998 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749013 4829 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749019 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749036 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749048 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749238 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749250 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749260 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749277 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749286 4829 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749319 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.752177 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761507 4829 scope.go:117] "RemoveContainer" containerID="4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761691 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761724 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.762838 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.806699 4829 scope.go:117] "RemoveContainer" containerID="9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851396 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: 
\"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851516 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851533 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851907 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953067 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953219 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953239 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.958617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.962531 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.963652 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.963822 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.984341 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.080619 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.104516 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.104618 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.162526 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:58 crc kubenswrapper[4829]: > Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.186243 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.306052 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" path="/var/lib/kubelet/pods/eebac8aa-36b1-4a0d-9490-c34c7d137be2/volumes" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.624259 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.718012 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"b15b8a2c2fe4022bce337bd6c570aad6d1fe85a99014bfa877c56e943e1fb42f"} Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.775306 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.828564 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:59 crc kubenswrapper[4829]: I0217 16:18:59.732898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4"} Feb 17 16:19:00 crc kubenswrapper[4829]: I0217 16:19:00.747387 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74rcl" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" containerID="cri-o://801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" gracePeriod=2 Feb 17 16:19:00 crc kubenswrapper[4829]: I0217 16:19:00.747951 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.297969 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.304207 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314090 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314168 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314357 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nfxjw" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333275 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.351639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.388280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.389767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.399258 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.402684 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492230 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492297 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492320 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.506020 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " 
pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.509312 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.552989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.562479 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.580038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595221 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595736 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-utilities" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595749 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-utilities" Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595774 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-content" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595781 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-content" Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595788 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595795 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.596016 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.597157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604109 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.601235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604938 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604974 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605158 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605188 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605215 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605322 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605440 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605458 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605511 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities" (OuterVolumeSpecName: "utilities") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.607608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc" (OuterVolumeSpecName: "kube-api-access-xl6kc") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "kube-api-access-xl6kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.623677 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.640000 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.641347 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.645287 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.651948 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.697262 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708681 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708725 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708748 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " 
pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708917 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708952 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708990 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " 
pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709076 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709114 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709183 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709196 4829 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709969 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.710237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.710415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.711099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.711764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " 
pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.728131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.740160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.740770 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.741039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.741773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc 
kubenswrapper[4829]: I0217 16:19:01.763142 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.765944 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770133 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" exitCode=0 Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770196 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770268 4829 scope.go:117] "RemoveContainer" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770593 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.825824 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826297 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.833377 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.833704 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.845617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.846249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.856850 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.885675 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.972681 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.974817 4829 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.976979 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.977190 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.978433 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.997506 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"] Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.030509 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032354 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032423 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032470 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032497 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032587 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032678 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032774 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.035344 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.040423 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125029 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125257 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log" containerID="cri-o://9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b" gracePeriod=30 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125718 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd" containerID="cri-o://40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" gracePeriod=30 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134431 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134463 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134497 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.135411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.135491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.140557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.140655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod 
\"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.141773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.142002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.145657 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.158614 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.298811 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" path="/var/lib/kubelet/pods/8fb22913-2026-46cd-b4b8-5ac091e23320/volumes" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.301648 4829 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.781051 4829 generic.go:334] "Generic (PLEG): container finished" podID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerID="9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b" exitCode=143 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.781093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b"} Feb 17 16:19:04 crc kubenswrapper[4829]: I0217 16:19:04.333519 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:19:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:19:04 crc kubenswrapper[4829]: > Feb 17 16:19:05 crc kubenswrapper[4829]: E0217 16:19:05.132703 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-conmon-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-conmon-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-conmon-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:05 crc kubenswrapper[4829]: E0217 16:19:05.132735 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-conmon-03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-conmon-0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-conmon-5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-conmon-bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-conmon-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-conmon-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-conmon-894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-conmon-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-conmon-d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045.scope\": RecentStats: unable to find data in memory cache]" Feb 17 
16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.295999 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.834204 4829 generic.go:334] "Generic (PLEG): container finished" podID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerID="40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" exitCode=0 Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.834275 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9"} Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.838027 4829 generic.go:334] "Generic (PLEG): container finished" podID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerID="98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" exitCode=137 Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.838073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1"} Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.020897 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": dial tcp 10.217.0.205:8776: connect: connection refused" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.223408 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.302181 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g92l5" 
Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.784528 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.786305 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.808624 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.818713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.818814 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.902603 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.904305 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920802 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.921998 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: 
\"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.923672 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.953959 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.985024 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.997268 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.015369 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.022830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.022880 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 
16:19:08.022985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.023315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.024670 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.058229 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.060208 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.067838 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.103329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.103404 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: 
\"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125521 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.126290 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.163445 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.178298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.231368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.231487 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.232739 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.238170 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.256291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.271644 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:08 crc kubenswrapper[4829]: E0217 16:19:08.276165 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.325432 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.327336 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.333199 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.342086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.346543 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.446325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.446375 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " 
pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.454416 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.456358 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.458105 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.461273 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.467313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548746 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.549779 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.568501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.650599 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 
17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.650988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.651325 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.654819 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.673811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.775689 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.912868 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" containerID="cri-o://4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" gracePeriod=2 Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.940780 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.942374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.951959 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.979739 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.985969 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.036478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.067155 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.069035 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071340 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071476 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071612 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod 
\"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071935 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.150610 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182350 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 
16:19:09.182394 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182545 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182561 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182641 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182679 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182697 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182741 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182766 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.197655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.198912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.199689 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.200332 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.202636 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.210461 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.215180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.226623 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.287987 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288072 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288168 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.297815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.300102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.302509 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod 
\"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.313619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.373691 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.400210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.418155 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.927656 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" exitCode=0 Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.927718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71"} Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.273144 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.302584 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:10 crc 
kubenswrapper[4829]: I0217 16:19:10.326101 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.327734 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.330612 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.330726 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.384508 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.386145 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.388968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.389818 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.393887 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.402856 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.414884 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod \"heat-api-7bf669c95c-g7msn\" (UID: 
\"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.414999 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415081 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517595 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517899 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517956 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518030 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518056 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518085 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.537322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.537416 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod \"heat-api-7bf669c95c-g7msn\" (UID: 
\"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539991 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.549506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620255 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " 
pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620435 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620495 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.626508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " 
pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.627645 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.628140 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.628688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.636680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.648016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc 
kubenswrapper[4829]: I0217 16:19:10.705445 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.714687 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:11 crc kubenswrapper[4829]: I0217 16:19:11.662214 4829 scope.go:117] "RemoveContainer" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:11 crc kubenswrapper[4829]: I0217 16:19:11.949924 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.001755 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61"} Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.001794 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091364 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091736 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.093249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities" (OuterVolumeSpecName: "utilities") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.098663 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7" (OuterVolumeSpecName: "kube-api-access-4f2c7") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "kube-api-access-4f2c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.125371 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.161492 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.166972 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195135 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195368 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195438 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.295901 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: 
\"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296065 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296262 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296394 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296486 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296650 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296777 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297207 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs" (OuterVolumeSpecName: "logs") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297563 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297674 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.312076 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.313561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts" (OuterVolumeSpecName: "scripts") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.318917 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg" (OuterVolumeSpecName: "kube-api-access-pc9xg") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "kube-api-access-pc9xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.325107 4829 scope.go:117] "RemoveContainer" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.333799 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.393889 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408329 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408388 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408688 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408794 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409399 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409416 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409427 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409438 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 
16:19:12.409645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409979 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs" (OuterVolumeSpecName: "logs") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.425975 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts" (OuterVolumeSpecName: "scripts") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.430987 4829 scope.go:117] "RemoveContainer" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.432920 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data" (OuterVolumeSpecName: "config-data") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.435282 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.444215 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": container with ID starting with 801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86 not found: ID does not exist" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.444255 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} err="failed to get container status \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": rpc error: code = NotFound desc = could not find container \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": container with ID starting with 801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86 not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.444285 4829 scope.go:117] "RemoveContainer" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.450724 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": container with ID starting with 823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f not found: ID does not exist" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.450759 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} err="failed to get container status \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": rpc error: code = NotFound desc = could not find container \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": container with ID starting with 823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.450784 4829 scope.go:117] "RemoveContainer" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.466645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b" (OuterVolumeSpecName: "kube-api-access-88n9b") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "kube-api-access-88n9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.473720 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": container with ID starting with 3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635 not found: ID does not exist" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.473765 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} err="failed to get container status \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": rpc error: code = NotFound desc = could not find container \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": container with ID starting with 3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635 not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.473793 4829 scope.go:117] "RemoveContainer" containerID="4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.487745 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519207 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519232 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") on node \"crc\" DevicePath \"\"" Feb 
17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519243 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519252 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519263 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.540310 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (OuterVolumeSpecName: "glance") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.621049 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.638898 4829 scope.go:117] "RemoveContainer" containerID="002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.846163 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.846565 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537") on node "crc" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.853546 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.877912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.917656 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data" (OuterVolumeSpecName: "config-data") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928160 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928189 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928200 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928212 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.012754 4829 scope.go:117] "RemoveContainer" containerID="c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.089409 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.089864 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" containerID="cri-o://1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" 
gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090138 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090430 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" containerID="cri-o://8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090483 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" containerID="cri-o://8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090516 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" containerID="cri-o://8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.114723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerStarted","Data":"32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.123642 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.759696388 podStartE2EDuration="16.123622872s" podCreationTimestamp="2026-02-17 16:18:57 +0000 UTC" firstStartedPulling="2026-02-17 16:18:58.6127864 +0000 UTC m=+1451.029804378" lastFinishedPulling="2026-02-17 16:19:11.976712884 +0000 UTC m=+1464.393730862" 
observedRunningTime="2026-02-17 16:19:13.110467327 +0000 UTC m=+1465.527485315" watchObservedRunningTime="2026-02-17 16:19:13.123622872 +0000 UTC m=+1465.540640850" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127670 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127713 4829 scope.go:117] "RemoveContainer" containerID="98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.131098 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4561ce68-ba71-42ad-95ec-de8b705a06ef","Type":"ContainerStarted","Data":"32fa907e41420333e66cf2b4635d5ee91a924e5de9bf58928768552d6a7363bc"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.152248 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.129957342 podStartE2EDuration="19.152234555s" podCreationTimestamp="2026-02-17 16:18:54 +0000 UTC" firstStartedPulling="2026-02-17 16:18:55.469941779 +0000 UTC m=+1447.886959757" lastFinishedPulling="2026-02-17 16:19:11.492218982 +0000 UTC m=+1463.909236970" observedRunningTime="2026-02-17 16:19:13.149956243 +0000 UTC m=+1465.566974221" watchObservedRunningTime="2026-02-17 16:19:13.152234555 +0000 UTC m=+1465.569252533" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.153612 4829 scope.go:117] "RemoveContainer" containerID="7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.156248 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"26df09ac78a076eb0f2fab2e97427288c9dbe4295d421971b90f039ccad0b50a"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.156430 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.180223 4829 scope.go:117] "RemoveContainer" containerID="40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.197107 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.227879 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.261031 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262277 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262384 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262446 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-content" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262504 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-content" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262568 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262720 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262790 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262848 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262909 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-utilities" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262960 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-utilities" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.263030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263083 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.263136 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263186 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263535 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263620 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263689 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263917 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263978 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265131 4829 scope.go:117] "RemoveContainer" containerID="9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265647 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265826 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.268232 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.270667 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.270982 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.289181 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.312692 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.332339 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.348334 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.348720 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.351238 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.351552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.353528 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.358680 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.389707 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.433487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550971 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550994 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551105 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551155 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551198 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551336 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656093 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656117 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656139 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656283 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656303 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " 
pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656386 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656402 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656418 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656447 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656493 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.671175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.673162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.674349 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.676187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.676522 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.702378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.732811 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.738234 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.742998 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.743902 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.747624 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " 
pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.747771 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748130 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748428 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748709 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.755221 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.774955 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.841420 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.841462 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.957556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.990386 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.037553 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227334 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" exitCode=2 Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227657 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" exitCode=0 Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f"} Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227726 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4"} Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.245025 4829 generic.go:334] "Generic (PLEG): container finished" podID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerID="a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df" exitCode=0 Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.245083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerDied","Data":"a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df"} Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.255649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" 
event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"9b7829ddff737dae110188099ffcfcca290e157b306ee21c83290ddc54364056"} Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.260731 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" event={"ID":"5dfe4b1a-5f10-47f3-ab81-0807c468fab0","Type":"ContainerStarted","Data":"77da194c262ed24f7e5808a948240e19e60fa35611f92398267d500dd975f8ec"} Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.306691 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice/crio-ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb WatchSource:0}: Error finding container ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb: Status 404 returned error can't find the container with id ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.324224 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" path="/var/lib/kubelet/pods/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e/volumes" Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.325780 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" path="/var/lib/kubelet/pods/631fedb6-df0e-40fa-a86c-40cc89db194f/volumes" Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.330963 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" path="/var/lib/kubelet/pods/dcd8f257-bfbb-4393-b0b3-f1c955a73e05/volumes" Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.331966 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.332010 
4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.332025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"3cfd5b4a2eec48fa3b356560508d7b1e10c91f89ca2f91c17d90090a20ce014f"} Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.337377 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.349656 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.360214 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.441049 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.479814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.499567 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca WatchSource:0}: Error finding container 0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca: Status 404 returned error can't find the container with id 0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.502246 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:19:14 crc 
kubenswrapper[4829]: W0217 16:19:14.530911 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice/crio-382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101 WatchSource:0}: Error finding container 382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101: Status 404 returned error can't find the container with id 382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101 Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.535827 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.556086 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.570485 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.583332 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"] Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.590772 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531a6d2a_8cc6_4d30_a906_826fba92e926.slice/crio-4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d WatchSource:0}: Error finding container 4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d: Status 404 returned error can't find the container with id 4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.594221 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe43e34b_d8ec_44cd_bc26_e0ce3c9797a7.slice/crio-e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0 WatchSource:0}: Error finding container e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0: Status 404 returned error can't find the container with id e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0 Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.599493 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.776328 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.995313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:15 crc kubenswrapper[4829]: W0217 16:19:15.088263 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4708c572_1818_4307_8667_0e2cb60f5635.slice/crio-6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a WatchSource:0}: Error finding container 6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a: Status 404 returned error can't find the container with id 6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.300248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"effd450865bb97a34c3515f6ac7f39ede1e9688582703d4a3c8820cf02cb2a03"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.313671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" 
event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"4ef0f0fdd58c449b7bd153a2e6b41e72b42f83d436a32880335f79f65dd269bd"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.313717 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"b3a69a41237582e8aca84cc6f5a06a0f5de9dc81fff09c20093ef9e26ef4033b"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.314597 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.314651 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.321739 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"95375bc6f346a6fe6af46463b8db7c53fa38cd84c3783df66e0720a068bc27d4"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.323267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bf669c95c-g7msn" event={"ID":"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7","Type":"ContainerStarted","Data":"e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.328753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerStarted","Data":"8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.334605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7db87d5bbf-dtdjh" 
event={"ID":"59de3866-adfb-4a8d-87f2-b54af38332d0","Type":"ContainerStarted","Data":"b253dfec5873832620fdac0a570303465bbc77ba3023c843e4bde8980efbe498"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.334821 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d69d97dcf-pdd69" podStartSLOduration=14.334807497 podStartE2EDuration="14.334807497s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.332303609 +0000 UTC m=+1467.749321587" watchObservedRunningTime="2026-02-17 16:19:15.334807497 +0000 UTC m=+1467.751825465" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.344066 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerStarted","Data":"0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.348793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerStarted","Data":"92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.358441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerStarted","Data":"ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.363950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" 
event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerStarted","Data":"d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.378162 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" exitCode=0 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.378280 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.382684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerStarted","Data":"382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.385722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerStarted","Data":"4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.388457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.393406 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerStarted","Data":"ad768e518034fae299e9c917a36a527e20f09615bf89f800e1faf24578b3afd0"} Feb 17 16:19:15 crc 
kubenswrapper[4829]: I0217 16:19:15.406295 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3357-account-create-update-rg852" podStartSLOduration=7.406274627 podStartE2EDuration="7.406274627s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.375777143 +0000 UTC m=+1467.792795111" watchObservedRunningTime="2026-02-17 16:19:15.406274627 +0000 UTC m=+1467.823292605" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.411269 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" containerID="cri-o://c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6" gracePeriod=2 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.412244 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerStarted","Data":"18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.412267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerStarted","Data":"456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123"} Feb 17 16:19:15 crc kubenswrapper[4829]: E0217 16:19:15.574760 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.950675 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-cnfbw" podStartSLOduration=8.950656365 podStartE2EDuration="8.950656365s" podCreationTimestamp="2026-02-17 16:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.438516697 +0000 UTC m=+1467.855534675" watchObservedRunningTime="2026-02-17 16:19:15.950656365 +0000 UTC m=+1468.367674333" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.960773 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.960988 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" containerID="cri-o://c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501" gracePeriod=30 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.961118 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" containerID="cri-o://53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5" gracePeriod=30 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.432621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7db87d5bbf-dtdjh" 
event={"ID":"59de3866-adfb-4a8d-87f2-b54af38332d0","Type":"ContainerStarted","Data":"93cdf8724baf647e738ca65ba597eb6d07b02bcc0c0078364e778089de2c195d"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.434221 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.438338 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcdf2448-5ccb-4351-b022-de49263fd521" containerID="a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.438386 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerDied","Data":"a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.441617 4829 generic.go:334] "Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.441683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.444097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"feecf691f350e4e4d2f1d885c2443527110811f43796b500d48e8dd87dbe621e"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.446680 4829 generic.go:334] "Generic (PLEG): container finished" podID="08208ef6-e99c-4f83-952c-5828df9b7af8" 
containerID="a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.446723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.453529 4829 generic.go:334] "Generic (PLEG): container finished" podID="544f59e2-daea-45db-99b4-d9714f620a74" containerID="18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.453607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerDied","Data":"18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.459669 4829 generic.go:334] "Generic (PLEG): container finished" podID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerID="c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501" exitCode=143 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.459710 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.461701 4829 generic.go:334] "Generic (PLEG): container finished" podID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerID="163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.461795 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" 
event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerDied","Data":"163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.499217 4829 generic.go:334] "Generic (PLEG): container finished" podID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerID="19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.499293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerDied","Data":"19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.508339 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"c2c3295b07155a30b197a649d80dcf344571036b28fe9a727c6720bb13714e10"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.514915 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7db87d5bbf-dtdjh" podStartSLOduration=8.51489805 podStartE2EDuration="8.51489805s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:16.484519951 +0000 UTC m=+1468.901537919" watchObservedRunningTime="2026-02-17 16:19:16.51489805 +0000 UTC m=+1468.931916028" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.529926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerStarted","Data":"3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.531512 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.567784 4829 generic.go:334] "Generic (PLEG): container finished" podID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerID="7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.568815 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerDied","Data":"7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.631132 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podStartSLOduration=15.631113628 podStartE2EDuration="15.631113628s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:16.603512114 +0000 UTC m=+1469.020530102" watchObservedRunningTime="2026-02-17 16:19:16.631113628 +0000 UTC m=+1469.048131606" Feb 17 16:19:17 crc kubenswrapper[4829]: I0217 16:19:17.058393 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.223324 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6"
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.340793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") "
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.341157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") "
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.347960 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" (UID: "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.365950 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc" (OuterVolumeSpecName: "kube-api-access-k7jpc") pod "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" (UID: "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd"). InnerVolumeSpecName "kube-api-access-k7jpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.443751 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.443794 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.649835 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6"
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.652917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerDied","Data":"32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5"}
Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.652960 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.419706 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.519369 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.563029 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.574889 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.574960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.576027 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" (UID: "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.576720 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.582153 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.592561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7" (OuterVolumeSpecName: "kube-api-access-6pzt7") pod "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" (UID: "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5"). InnerVolumeSpecName "kube-api-access-6pzt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.635825 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677128 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677203 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"544f59e2-daea-45db-99b4-d9714f620a74\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677318 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677346 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677427 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"544f59e2-daea-45db-99b4-d9714f620a74\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"c909da16-2d5d-4706-adb8-f8402ed9f01e\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"c909da16-2d5d-4706-adb8-f8402ed9f01e\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677856 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678321 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8a9c261-a9c4-49c8-bec3-891a68d897b6" (UID: "c8a9c261-a9c4-49c8-bec3-891a68d897b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678786 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678817 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.680325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities" (OuterVolumeSpecName: "utilities") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.681137 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "544f59e2-daea-45db-99b4-d9714f620a74" (UID: "544f59e2-daea-45db-99b4-d9714f620a74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.683835 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c909da16-2d5d-4706-adb8-f8402ed9f01e" (UID: "c909da16-2d5d-4706-adb8-f8402ed9f01e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerDied","Data":"ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698195 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698276 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703506 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerDied","Data":"456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703551 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703630 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.720010 4829 generic.go:334] "Generic (PLEG): container finished" podID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerID="53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5" exitCode=0
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.720723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerDied","Data":"8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740395 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740449 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.750676 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerDied","Data":"382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756290 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756978 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.766331 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx" (OuterVolumeSpecName: "kube-api-access-zkjwx") pod "c909da16-2d5d-4706-adb8-f8402ed9f01e" (UID: "c909da16-2d5d-4706-adb8-f8402ed9f01e"). InnerVolumeSpecName "kube-api-access-zkjwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.767249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7" (OuterVolumeSpecName: "kube-api-access-cgff7") pod "544f59e2-daea-45db-99b4-d9714f620a74" (UID: "544f59e2-daea-45db-99b4-d9714f620a74"). InnerVolumeSpecName "kube-api-access-cgff7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.767735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr" (OuterVolumeSpecName: "kube-api-access-65prr") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "kube-api-access-65prr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.768746 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk" (OuterVolumeSpecName: "kube-api-access-nfxfk") pod "c8a9c261-a9c4-49c8-bec3-891a68d897b6" (UID: "c8a9c261-a9c4-49c8-bec3-891a68d897b6"). InnerVolumeSpecName "kube-api-access-nfxfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785719 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerDied","Data":"92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785755 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785838 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.788172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"dcdf2448-5ccb-4351-b022-de49263fd521\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.788345 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"dcdf2448-5ccb-4351-b022-de49263fd521\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") "
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.799655 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dcdf2448-5ccb-4351-b022-de49263fd521" (UID: "dcdf2448-5ccb-4351-b022-de49263fd521"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800155 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800183 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800210 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800222 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800236 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800246 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800265 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800278 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.847852 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc" (OuterVolumeSpecName: "kube-api-access-j6wlc") pod "dcdf2448-5ccb-4351-b022-de49263fd521" (UID: "dcdf2448-5ccb-4351-b022-de49263fd521"). InnerVolumeSpecName "kube-api-access-j6wlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086"}
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851238 4829 scope.go:117] "RemoveContainer" containerID="c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851432 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.903043 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.903076 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.015533 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.057540 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.220057 4829 scope.go:117] "RemoveContainer" containerID="bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.296690 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" path="/var/lib/kubelet/pods/cb993f64-fe54-4fed-9aca-68e11a71eee7/volumes"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.497977 4829 scope.go:117] "RemoveContainer" containerID="aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.573365 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621257 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621674 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622131 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622301 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.624170 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs" (OuterVolumeSpecName: "logs") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.627196 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.667776 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts" (OuterVolumeSpecName: "scripts") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.667858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk" (OuterVolumeSpecName: "kube-api-access-rsjdk") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "kube-api-access-rsjdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.677846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725542 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725572 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725593 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725601 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725610 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.773683 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (OuterVolumeSpecName: "glance") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "pvc-60154460-e4e5-447b-9d26-02e14a9d8490". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.798513 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data" (OuterVolumeSpecName: "config-data") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.827833 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.828406 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" "
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.870809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9"}
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.870871 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.877602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerStarted","Data":"28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2"}
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.877738 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.885855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1"}
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.885923 4829 scope.go:117] "RemoveContainer" containerID="53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.886067 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.888417 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podStartSLOduration=8.036838752 podStartE2EDuration="12.888399621s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.34904626 +0000 UTC m=+1466.766064238" lastFinishedPulling="2026-02-17 16:19:19.200607129 +0000 UTC m=+1471.617625107" observedRunningTime="2026-02-17 16:19:20.888254027 +0000 UTC m=+1473.305272005" watchObservedRunningTime="2026-02-17 16:19:20.888399621 +0000 UTC m=+1473.305417599"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.890842 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerStarted","Data":"7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb"}
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.891094 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-58844cd98c-2snd2" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" containerID="cri-o://7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" gracePeriod=60
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.891476 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-58844cd98c-2snd2"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.899977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7"}
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.900254 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-647dbf4b4b-fgckf"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.912747 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" podStartSLOduration=19.912730718 podStartE2EDuration="19.912730718s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.908833493 +0000 UTC m=+1473.325851471" watchObservedRunningTime="2026-02-17 16:19:20.912730718 +0000 UTC m=+1473.329748696"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.929645 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.929793 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490") on node "crc"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.930493 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.946785 4829 scope.go:117] "RemoveContainer" containerID="c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501"
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.990144 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-58844cd98c-2snd2" podStartSLOduration=15.306071983 podStartE2EDuration="19.990122978s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.511804065 +0000 UTC m=+1466.928822043" lastFinishedPulling="2026-02-17 16:19:19.19585506 +0000 UTC m=+1471.612873038" observedRunningTime="2026-02-17 16:19:20.947297781 +0000 UTC m=+1473.364315759" watchObservedRunningTime="2026-02-17 16:19:20.990122978 +0000 UTC m=+1473.407140966"
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.001772 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-647dbf4b4b-fgckf" podStartSLOduration=7.165747022 podStartE2EDuration="13.001748582s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="2026-02-17 16:19:13.323716064 +0000 UTC m=+1465.740734042" lastFinishedPulling="2026-02-17 16:19:19.159717624 +0000 UTC m=+1471.576735602" observedRunningTime="2026-02-17 16:19:20.963698134 +0000 UTC m=+1473.380716112" watchObservedRunningTime="2026-02-17 16:19:21.001748582 +0000 UTC m=+1473.418766560"
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.089517 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.135219 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.306143 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.316941 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332091 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332589 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update"
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332606 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update"
Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332617 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create"
Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332633 4829 state_mem.go:107] "Deleted CPUSet
assignment" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332667 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-content" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332674 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-content" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332685 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332691 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332701 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-utilities" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332707 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-utilities" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332714 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332721 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332735 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332741 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332758 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332763 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332772 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332778 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332788 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332794 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332802 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332808 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333005 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: 
I0217 16:19:21.333020 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333033 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333044 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333052 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333059 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333068 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333078 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333090 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.334358 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.337809 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.364269 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.366845 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.440817 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441766 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441793 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441867 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441905 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544486 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544695 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544810 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544825 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.545274 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.545482 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.549774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.550142 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.550171 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.562839 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.563294 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.568220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.582028 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.684624 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.913827 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"924d00ed836b571c32d69ecb057ea48470718059438d6e5408ef3d836d3a7a0e"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.914314 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.915414 4829 generic.go:334] "Generic (PLEG): container finished" podID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9" exitCode=1 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.915475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" 
event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.916116 4829 scope.go:117] "RemoveContainer" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.916845 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bf669c95c-g7msn" event={"ID":"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7","Type":"ContainerStarted","Data":"afa64044d9cc839b7e18d702eea2f9ae926189a112c5e5299c5ac2d9b45e2db9"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.917054 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerStarted","Data":"04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924283 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924291 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" containerID="cri-o://04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1" gracePeriod=60 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927122 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7" exitCode=1 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927178 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927891 4829 scope.go:117] "RemoveContainer" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.930888 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"fce6ee49837f18aeb4ef673987697711ac43588da91bd68ab4cd453076fb5ec7"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.935972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" event={"ID":"5dfe4b1a-5f10-47f3-ab81-0807c468fab0","Type":"ContainerStarted","Data":"8c1c1354bf0b94e8c9c24f6c40dda3774dc832ece8aab327d939ab39a2f29b5e"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.936647 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.943270 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.943253283 podStartE2EDuration="8.943253283s" podCreationTimestamp="2026-02-17 16:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:21.93017124 +0000 UTC m=+1474.347189218" watchObservedRunningTime="2026-02-17 16:19:21.943253283 +0000 UTC m=+1474.360271261" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.964767 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.967811 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" podStartSLOduration=16.332165718 podStartE2EDuration="20.967794386s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.594709373 +0000 UTC m=+1467.011727351" lastFinishedPulling="2026-02-17 16:19:19.230338041 +0000 UTC m=+1471.647356019" observedRunningTime="2026-02-17 16:19:21.956021089 +0000 UTC m=+1474.373039067" watchObservedRunningTime="2026-02-17 16:19:21.967794386 +0000 UTC m=+1474.384812364" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.029013 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7bf669c95c-g7msn" podStartSLOduration=7.436842465 podStartE2EDuration="12.028996929s" podCreationTimestamp="2026-02-17 16:19:10 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.603716576 +0000 UTC m=+1467.020734554" lastFinishedPulling="2026-02-17 16:19:19.19587104 +0000 UTC m=+1471.612889018" observedRunningTime="2026-02-17 16:19:22.025366621 +0000 UTC m=+1474.442384619" watchObservedRunningTime="2026-02-17 16:19:22.028996929 +0000 UTC m=+1474.446014907" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.061486 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" podStartSLOduration=6.187573284 podStartE2EDuration="12.061468516s" podCreationTimestamp="2026-02-17 16:19:10 +0000 UTC" firstStartedPulling="2026-02-17 16:19:13.324377373 +0000 UTC m=+1465.741395351" lastFinishedPulling="2026-02-17 16:19:19.198272605 +0000 UTC m=+1471.615290583" observedRunningTime="2026-02-17 16:19:22.055880995 +0000 UTC m=+1474.472898983" watchObservedRunningTime="2026-02-17 16:19:22.061468516 +0000 UTC m=+1474.478486494" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 
16:19:22.108799 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.108780673 podStartE2EDuration="9.108780673s" podCreationTimestamp="2026-02-17 16:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:22.095168485 +0000 UTC m=+1474.512186463" watchObservedRunningTime="2026-02-17 16:19:22.108780673 +0000 UTC m=+1474.525798651" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.301207 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" path="/var/lib/kubelet/pods/c3f146bc-ed08-462a-9c4a-f5641b460469/volumes" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.309346 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.309399 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.430534 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.430916 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.703539 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.949510 4829 generic.go:334] "Generic (PLEG): container finished" podID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerID="04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1" exitCode=0
Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.949588 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerDied","Data":"04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1"}
Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.951285 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"ef332962cfbb0da0428cedc06ffb50074342b92fa6e7baf8ac870434bd9e9166"}
Feb 17 16:19:23 crc kubenswrapper[4829]: E0217 16:19:23.562499 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.968643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"]
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.970301 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.986413 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"]
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.992386 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wx8s7"
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.994106 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.994240 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030175 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030258 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030315 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030384 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.037964 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.039335 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.078370 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.117117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132753 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.133104 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.142811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.146379 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.153402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.155242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.291367 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.400876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.425326 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-647dbf4b4b-fgckf"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.965026 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k"
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.975709 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") "
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.975999 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") "
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.976122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") "
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.976183 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") "
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.982742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.989709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk" (OuterVolumeSpecName: "kube-api-access-pqzqk") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "kube-api-access-pqzqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.030655 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052157 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerDied","Data":"4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d"}
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052467 4829 scope.go:117] "RemoveContainer" containerID="04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1"
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052589 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k"
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.059885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"}
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.061689 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-647dbf4b4b-fgckf"
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078884 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078924 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078935 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.079763 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"}
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080542 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data" (OuterVolumeSpecName: "config-data") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.181214 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.396038 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"]
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.407447 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"]
Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.449044 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"]
Feb 17 16:19:25 crc kubenswrapper[4829]: W0217 16:19:25.453808 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443 WatchSource:0}: Error finding container 3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443: Status 404 returned error can't find the container with id 3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443
Feb 17 16:19:25 crc kubenswrapper[4829]: E0217 16:19:25.677385 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.098209 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"5c68e78e9dafd8fee502c806ca62674bf75ddb93f865f78af0d551b191fab20f"}
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.099535 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"22b2d64ca7d7156d906cb52a8ed5f292f4386365304da910768ec0db2d4c0335"}
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102588 4829 generic.go:334] "Generic (PLEG): container finished" podID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" exitCode=1
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"}
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102721 4829 scope.go:117] "RemoveContainer" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.103657 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"
Feb 17 16:19:26 crc kubenswrapper[4829]: E0217 16:19:26.104004 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.110788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerStarted","Data":"3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443"}
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.125942 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" exitCode=1
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.128018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"}
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.128074 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"
Feb 17 16:19:26 crc kubenswrapper[4829]: E0217 16:19:26.129259 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647dbf4b4b-fgckf_openstack(cbedef6f-85e8-418a-b925-8d2a8e73bb5c)\"" pod="openstack/heat-api-647dbf4b4b-fgckf" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.157670 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.157643927 podStartE2EDuration="5.157643927s" podCreationTimestamp="2026-02-17 16:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:26.128208033 +0000 UTC m=+1478.545226011" watchObservedRunningTime="2026-02-17 16:19:26.157643927 +0000 UTC m=+1478.574661925"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.306994 4829 scope.go:117] "RemoveContainer" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7"
Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.411306 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" path="/var/lib/kubelet/pods/531a6d2a-8cc6-4d30-a906-826fba92e926/volumes"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.031737 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.110424 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"]
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.110707 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" containerID="cri-o://0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" gracePeriod=10
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.168827 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"
Feb 17 16:19:27 crc kubenswrapper[4829]: E0217 16:19:27.169073 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.176833 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.176847 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:19:27 crc kubenswrapper[4829]: E0217 16:19:27.177030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647dbf4b4b-fgckf_openstack(cbedef6f-85e8-418a-b925-8d2a8e73bb5c)\"" pod="openstack/heat-api-647dbf4b4b-fgckf" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.836453 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967103 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967192 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967252 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967358 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967550 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") "
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:27.984275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx" (OuterVolumeSpecName: "kube-api-access-9ffxx") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "kube-api-access-9ffxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.045386 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.063823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.067089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.076087 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config" (OuterVolumeSpecName: "config") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.077266 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.087494 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093293 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093336 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093350 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093362 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093373 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093384 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196863 4829 generic.go:334] "Generic (PLEG): container finished" podID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" exitCode=0
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"}
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196959 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"25c76158cbbd089e89beb231349a135df7ab735e2a004c66b802c8527397a342"}
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196984 4829 scope.go:117] "RemoveContainer" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.197117 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.236155 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"]
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.249777 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"]
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.289929 4829 scope.go:117] "RemoveContainer" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.308257 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" path="/var/lib/kubelet/pods/24a26c9f-0ba5-4714-9b6e-5319f3ed903a/volumes"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.342249 4829 scope.go:117] "RemoveContainer" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"
Feb 17 16:19:28 crc kubenswrapper[4829]: E0217 16:19:28.343037 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": container with ID starting with 0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12 not found: ID does not exist" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343139 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"} err="failed to get container status \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": rpc error: code = NotFound desc = could not find container \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": container with ID starting with 0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12 not found: ID does not exist"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343222 4829 scope.go:117] "RemoveContainer" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"
Feb 17 16:19:28 crc kubenswrapper[4829]: E0217 16:19:28.343523 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": container with ID starting with 8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022 not found: ID does not exist" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343746 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"} err="failed to get container status \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": rpc error: code = NotFound desc = could not find container \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": container with ID starting with 8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022 not found: ID does not exist"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.917517 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-58844cd98c-2snd2"
Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.927739 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7bf669c95c-g7msn"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.050954 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"]
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.400841 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.401107 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.401937 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"
Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.402217 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.474373 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7db87d5bbf-dtdjh"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.556996 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"]
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.557196 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" containerID="cri-o://3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" gracePeriod=60
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.568059 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.568146 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.575240 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.583812 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.593609 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.624729 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.624795 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.761361 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf"
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") "
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") "
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970201 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") "
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970243 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") "
Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.980856 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9" (OuterVolumeSpecName: "kube-api-access-cqhk9") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "kube-api-access-cqhk9".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.993772 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.025304 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073084 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073115 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073124 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.118410 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data" 
(OuterVolumeSpecName: "config-data") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.175558 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.230851 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.236865 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"9b7829ddff737dae110188099ffcfcca290e157b306ee21c83290ddc54364056"} Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.236966 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.273515 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.306244 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.706810 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.710954 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.715890 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.715927 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:31 crc kubenswrapper[4829]: I0217 16:19:31.966159 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:19:31 crc kubenswrapper[4829]: I0217 16:19:31.966227 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.027033 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.046197 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.259526 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.259561 
4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.303112 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" path="/var/lib/kubelet/pods/cbedef6f-85e8-418a-b925-8d2a8e73bb5c/volumes" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.891127 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.996147 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:33 crc kubenswrapper[4829]: I0217 16:19:33.920784 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:19:34 crc kubenswrapper[4829]: I0217 16:19:34.031822 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="816bca39-deec-496c-bb97-40d4ad4ca878" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.305901 4829 generic.go:334] "Generic (PLEG): container finished" podID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" exitCode=0 Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.305991 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerDied","Data":"3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa"} Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.851613 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.851693 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.855164 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:19:36 crc kubenswrapper[4829]: E0217 16:19:36.182985 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:38 crc kubenswrapper[4829]: E0217 16:19:38.263382 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.821510 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.936645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.936744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.937033 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.937064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.944806 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.969886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4" (OuterVolumeSpecName: "kube-api-access-mc9n4") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "kube-api-access-mc9n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.988823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.029917 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data" (OuterVolumeSpecName: "config-data") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040361 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040390 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040945 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040957 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382545 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"95375bc6f346a6fe6af46463b8db7c53fa38cd84c3783df66e0720a068bc27d4"} Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382853 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382651 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.415367 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.420824 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.428061 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550513 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550607 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550637 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: 
\"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.555810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg" (OuterVolumeSpecName: "kube-api-access-bckkg") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "kube-api-access-bckkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.555901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.580269 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.607317 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data" (OuterVolumeSpecName: "config-data") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.653966 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654212 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654290 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654354 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.408442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerStarted","Data":"56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029"} Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.410131 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerDied","Data":"ad768e518034fae299e9c917a36a527e20f09615bf89f800e1faf24578b3afd0"} Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.410171 4829 scope.go:117] "RemoveContainer" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 
16:19:41.410273 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.434380 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" podStartSLOduration=3.361207183 podStartE2EDuration="18.434348814s" podCreationTimestamp="2026-02-17 16:19:23 +0000 UTC" firstStartedPulling="2026-02-17 16:19:25.456873856 +0000 UTC m=+1477.873891844" lastFinishedPulling="2026-02-17 16:19:40.530015497 +0000 UTC m=+1492.947033475" observedRunningTime="2026-02-17 16:19:41.433976715 +0000 UTC m=+1493.850994713" watchObservedRunningTime="2026-02-17 16:19:41.434348814 +0000 UTC m=+1493.851366792" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.465862 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.479312 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:42 crc kubenswrapper[4829]: I0217 16:19:42.291994 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" path="/var/lib/kubelet/pods/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca/volumes" Feb 17 16:19:42 crc kubenswrapper[4829]: I0217 16:19:42.293250 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" path="/var/lib/kubelet/pods/8f1cb833-fb61-463d-a2d4-c14d51370dc9/volumes" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.442825 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" exitCode=137 Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.443149 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029"} Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.641088 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.719187 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720077 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720234 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") 
pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720385 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720593 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720981 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.721043 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.725822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk" (OuterVolumeSpecName: "kube-api-access-dtcqk") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "kube-api-access-dtcqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.726134 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts" (OuterVolumeSpecName: "scripts") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.775510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.813903 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825015 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825067 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825082 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825094 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825117 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.852810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data" (OuterVolumeSpecName: "config-data") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.926956 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.456294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"b15b8a2c2fe4022bce337bd6c570aad6d1fe85a99014bfa877c56e943e1fb42f"} Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.457041 4829 scope.go:117] "RemoveContainer" containerID="8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.456419 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.504812 4829 scope.go:117] "RemoveContainer" containerID="8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.507607 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.520490 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.549503 4829 scope.go:117] "RemoveContainer" containerID="8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.549694 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550266 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc 
kubenswrapper[4829]: I0217 16:19:44.550287 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550307 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="init" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550314 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="init" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550325 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550332 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550342 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550347 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550358 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550363 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550378 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 
16:19:44.550384 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550399 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550404 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550413 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550418 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550433 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550439 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550454 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550461 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550472 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550478 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550685 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550696 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550707 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550720 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550730 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550745 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550754 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550764 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550771 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550780 4829 
memory_manager.go:354] "RemoveStaleState removing state" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550977 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.551194 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.553234 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.556360 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.556527 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.566621 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.590054 4829 scope.go:117] "RemoveContainer" containerID="1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643119 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643542 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643771 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643962 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644290 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.759747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.761083 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.761298 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760534 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.763854 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.764521 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.765943 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.766783 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.769826 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.780254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.881613 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:45 crc kubenswrapper[4829]: I0217 16:19:45.386260 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:45 crc kubenswrapper[4829]: W0217 16:19:45.391803 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9 WatchSource:0}: Error finding container 2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9: Status 404 returned error can't find the container with id 2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9 Feb 17 16:19:45 crc kubenswrapper[4829]: I0217 16:19:45.473527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9"} Feb 17 16:19:46 crc kubenswrapper[4829]: I0217 16:19:46.298243 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" path="/var/lib/kubelet/pods/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0/volumes" Feb 17 16:19:46 crc kubenswrapper[4829]: I0217 16:19:46.493842 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d"} Feb 17 16:19:46 crc kubenswrapper[4829]: E0217 16:19:46.544613 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:47 crc kubenswrapper[4829]: I0217 16:19:47.574007 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.111306 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.111434 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.579690 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54ddd6397355b84a6538404b7e0b74cacc0798f30ad9a6fdc63f5d6f25040eae/diff" to get inode usage: stat 
/var/lib/containers/storage/overlay/54ddd6397355b84a6538404b7e0b74cacc0798f30ad9a6fdc63f5d6f25040eae/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" to get inode usage: stat /var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log: no such file or directory Feb 17 16:19:48 crc kubenswrapper[4829]: I0217 16:19:48.661952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004"} Feb 17 16:19:49 crc kubenswrapper[4829]: I0217 16:19:49.673950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d"} Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.449325 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod544f59e2-daea-45db-99b4-d9714f620a74"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : Timed out while waiting for systemd to remove kubepods-besteffort-pod544f59e2_daea_45db_99b4_d9714f620a74.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.449650 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : Timed out while waiting for systemd to remove kubepods-besteffort-pod544f59e2_daea_45db_99b4_d9714f620a74.slice" pod="openstack/nova-cell0-db-create-cnfbw" podUID="544f59e2-daea-45db-99b4-d9714f620a74" Feb 17 
16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.456109 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc909da16-2d5d-4706-adb8-f8402ed9f01e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : Timed out while waiting for systemd to remove kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.456139 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : Timed out while waiting for systemd to remove kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice" pod="openstack/nova-cell1-3357-account-create-update-rg852" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.566646 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddcdf2448-5ccb-4351-b022-de49263fd521"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : Timed out while waiting for systemd to remove kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.566724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : unable to destroy cgroup paths for cgroup [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : Timed out while waiting for systemd to remove kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice" pod="openstack/nova-api-db-create-cglz5" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 
16:19:50.687457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d"} Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687512 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687586 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687838 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" containerID="cri-o://c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687866 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" containerID="cri-o://7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687889 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" containerID="cri-o://314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687908 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" 
containerID="cri-o://82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.689690 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.726900 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1724717 podStartE2EDuration="6.726883061s" podCreationTimestamp="2026-02-17 16:19:44 +0000 UTC" firstStartedPulling="2026-02-17 16:19:45.399751066 +0000 UTC m=+1497.816769044" lastFinishedPulling="2026-02-17 16:19:49.954162427 +0000 UTC m=+1502.371180405" observedRunningTime="2026-02-17 16:19:50.715527073 +0000 UTC m=+1503.132545081" watchObservedRunningTime="2026-02-17 16:19:50.726883061 +0000 UTC m=+1503.143901039" Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.708831 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" exitCode=2 Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709178 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" exitCode=0 Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d"} Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709223 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004"} Feb 17 16:19:52 crc kubenswrapper[4829]: I0217 16:19:52.424897 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:52 crc kubenswrapper[4829]: I0217 16:19:52.424969 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:55 crc kubenswrapper[4829]: I0217 16:19:55.753217 4829 generic.go:334] "Generic (PLEG): container finished" podID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerID="56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029" exitCode=0 Feb 17 16:19:55 crc kubenswrapper[4829]: I0217 16:19:55.753300 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerDied","Data":"56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029"} Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.381187 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500810 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500988 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.501111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.513212 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts" (OuterVolumeSpecName: "scripts") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.530869 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8" (OuterVolumeSpecName: "kube-api-access-qxbn8") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "kube-api-access-qxbn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.539180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data" (OuterVolumeSpecName: "config-data") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.551739 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604661 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604702 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604715 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604732 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.784911 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerDied","Data":"3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443"} Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.784966 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.785018 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.998702 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:57 crc kubenswrapper[4829]: E0217 16:19:57.999273 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.999298 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.999633 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.000631 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.007323 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.007437 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wx8s7" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.028071 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.131374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 
16:19:58.131808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.131970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234350 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.239954 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.240456 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.253369 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.329307 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.847639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.809437 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f709715-5e80-4988-8eb5-8bebcd673c47","Type":"ContainerStarted","Data":"a5e36fd99e6e1002c5aa09f39496be5e7c16a987518bd8109a7d05cf53f78d75"} Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.809719 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f709715-5e80-4988-8eb5-8bebcd673c47","Type":"ContainerStarted","Data":"01609de051e1a240873448cf104c457d0bf876c7f3f7a4bba0b63795466bbf67"} Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.810289 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.837164 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.837141612 podStartE2EDuration="2.837141612s" podCreationTimestamp="2026-02-17 16:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:59.823730608 +0000 UTC m=+1512.240748576" watchObservedRunningTime="2026-02-17 16:19:59.837141612 +0000 UTC m=+1512.254159610" Feb 17 16:20:00 crc kubenswrapper[4829]: I0217 16:20:00.828351 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" exitCode=0 Feb 17 16:20:00 crc kubenswrapper[4829]: I0217 16:20:00.828433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d"} Feb 17 16:20:04 crc kubenswrapper[4829]: E0217 16:20:04.988133 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/08b1ceb9fd67392961b2a720dc2f4bc336a8a5170c8036f02d370bcb848fc25d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/08b1ceb9fd67392961b2a720dc2f4bc336a8a5170c8036f02d370bcb848fc25d/diff: no such file or directory, extraDiskErr: Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.360241 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.917910 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.919925 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.922563 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.922792 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.950885 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040527 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.136556 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.138188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.142746 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.142865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.143044 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.143125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" 
(UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.145058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.178002 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.184506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.185301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.191696 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.192675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.196821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.196892 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.197270 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.219991 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.241259 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.247656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.247829 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.248032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.308134 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.310022 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.315854 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.337960 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.339874 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.343536 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354846 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354987 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") 
pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355107 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355159 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.362869 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.381731 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.385281 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.386029 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: 
\"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.395278 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.405973 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.408200 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.424067 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.457826 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458100 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458167 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5q5\" 
(UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458208 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458235 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458329 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458360 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458387 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.462342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.463804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.480342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.492565 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.561951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562308 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562418 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562442 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562490 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562514 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562530 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562597 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562648 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562728 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.563104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.571704 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " 
pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.579395 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.579956 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.581789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.584370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.585563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.590004 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664485 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664531 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" 
Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664698 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.665292 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667111 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667763 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.681809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.772204 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.813260 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.853204 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.876342 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.988189 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.114695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.367276 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.570391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.822622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.839759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.001987 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerStarted","Data":"c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.002038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerStarted","Data":"7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.003917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"21037a41552d2f17b0298eab9cadbade38ca54aa96f604942f870e2e7cef5930"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.006009 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerStarted","Data":"28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.008517 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerStarted","Data":"571dce0f3dca1580b88fc77df97f1e4a84daf42acff7755a8cd9c913181ac9b2"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.011721 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"41f94608f0021132514460f997146b226afa5a638e41f12e1b716a14c00cd14b"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.013090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerStarted","Data":"0a6baf72f36f68b63d71c5c1e9e99dced488541d38aaf0d4ecd5c3f870c08fd3"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.022316 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7l7ns" podStartSLOduration=3.022298034 podStartE2EDuration="3.022298034s" podCreationTimestamp="2026-02-17 16:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:11.016269601 +0000 UTC m=+1523.433287579" watchObservedRunningTime="2026-02-17 16:20:11.022298034 +0000 UTC m=+1523.439316002" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.492491 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.494695 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.498147 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.498383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.542624 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626559 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626770 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729533 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.735620 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.735767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.762276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.772153 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.821329 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:12 crc kubenswrapper[4829]: I0217 16:20:12.032478 4829 generic.go:334] "Generic (PLEG): container finished" podID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerID="916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404" exitCode=0 Feb 17 16:20:12 crc kubenswrapper[4829]: I0217 16:20:12.032753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404"} Feb 17 16:20:13 crc kubenswrapper[4829]: I0217 16:20:13.275351 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:13 crc kubenswrapper[4829]: I0217 16:20:13.286835 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.483651 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.882829 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.889915 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.093026 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.093067 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.094857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerStarted","Data":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098652 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" containerID="cri-o://559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098619 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" containerID="cri-o://82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.103019 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" 
podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.103111 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerStarted","Data":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.105993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerStarted","Data":"035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.106034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerStarted","Data":"428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.109621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerStarted","Data":"09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.109789 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.127118 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.51111202 podStartE2EDuration="6.127099311s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.414766114 +0000 UTC 
m=+1522.831784092" lastFinishedPulling="2026-02-17 16:20:14.030753405 +0000 UTC m=+1526.447771383" observedRunningTime="2026-02-17 16:20:15.113816282 +0000 UTC m=+1527.530834260" watchObservedRunningTime="2026-02-17 16:20:15.127099311 +0000 UTC m=+1527.544117299" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.147382 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.364402747 podStartE2EDuration="6.147362919s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.139949033 +0000 UTC m=+1522.556967011" lastFinishedPulling="2026-02-17 16:20:13.922909205 +0000 UTC m=+1526.339927183" observedRunningTime="2026-02-17 16:20:15.136788213 +0000 UTC m=+1527.553806191" watchObservedRunningTime="2026-02-17 16:20:15.147362919 +0000 UTC m=+1527.564380897" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.158386 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.007402597 podStartE2EDuration="6.158369507s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.856802453 +0000 UTC m=+1523.273820431" lastFinishedPulling="2026-02-17 16:20:14.007769363 +0000 UTC m=+1526.424787341" observedRunningTime="2026-02-17 16:20:15.149154488 +0000 UTC m=+1527.566172456" watchObservedRunningTime="2026-02-17 16:20:15.158369507 +0000 UTC m=+1527.575387485" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.182541 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" podStartSLOduration=4.182522282 podStartE2EDuration="4.182522282s" podCreationTimestamp="2026-02-17 16:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:15.167014601 +0000 UTC m=+1527.584032579" 
watchObservedRunningTime="2026-02-17 16:20:15.182522282 +0000 UTC m=+1527.599540260" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.184432 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.858654951 podStartE2EDuration="6.184423253s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.597089661 +0000 UTC m=+1523.014107639" lastFinishedPulling="2026-02-17 16:20:13.922857963 +0000 UTC m=+1526.339875941" observedRunningTime="2026-02-17 16:20:15.179053648 +0000 UTC m=+1527.596071626" watchObservedRunningTime="2026-02-17 16:20:15.184423253 +0000 UTC m=+1527.601441231" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.226822 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" podStartSLOduration=6.226805161 podStartE2EDuration="6.226805161s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:15.19391268 +0000 UTC m=+1527.610930658" watchObservedRunningTime="2026-02-17 16:20:15.226805161 +0000 UTC m=+1527.643823139" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.078374 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.121669 4829 generic.go:334] "Generic (PLEG): container finished" podID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" exitCode=0 Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.121700 4829 generic.go:334] "Generic (PLEG): container finished" podID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" exitCode=143 Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122148 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"41f94608f0021132514460f997146b226afa5a638e41f12e1b716a14c00cd14b"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122229 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122376 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142098 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142213 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142442 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.146254 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs" (OuterVolumeSpecName: "logs") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.150251 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.174727 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.191810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z" (OuterVolumeSpecName: "kube-api-access-qdz4z") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "kube-api-access-qdz4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.195518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.228822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data" (OuterVolumeSpecName: "config-data") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252056 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252717 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252778 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.362966 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.363422 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": container with ID starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.363464 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} err="failed to get container status \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": container with ID 
starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.363489 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.364005 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364059 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} err="failed to get container status \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364086 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364460 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} err="failed to get container status \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": 
container with ID starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364480 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364852 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} err="failed to get container status \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.446958 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.457253 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.474447 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.474981 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475000 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.475030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc 
kubenswrapper[4829]: I0217 16:20:16.475037 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475249 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475283 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.476526 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.478668 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.479060 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.490879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560220 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " 
pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560876 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.561063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662793 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662819 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.664738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.675345 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.676653 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.679190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.689962 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.796127 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:17 crc kubenswrapper[4829]: I0217 16:20:17.522009 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.159890 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.160390 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"c4d90ff6dc961ef3104c3f1654909960f94137d701493b08670847050b615a45"} Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.324887 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" path="/var/lib/kubelet/pods/81822b2e-5592-4ac6-bf30-c8a3f97d7128/volumes" Feb 17 
16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.173954 4829 generic.go:334] "Generic (PLEG): container finished" podID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerID="c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447" exitCode=0 Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.174043 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerDied","Data":"c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447"} Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.177079 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.240739 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.240713876 podStartE2EDuration="3.240713876s" podCreationTimestamp="2026-02-17 16:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:19.230602573 +0000 UTC m=+1531.647620551" watchObservedRunningTime="2026-02-17 16:20:19.240713876 +0000 UTC m=+1531.657731894" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.590661 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.590957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.625942 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.773647 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.773735 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.853938 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.879038 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.975038 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.975307 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" containerID="cri-o://28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" gracePeriod=10 Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.212370 4829 generic.go:334] "Generic (PLEG): container finished" podID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerID="28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" exitCode=0 Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.212654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2"} Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.240210 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.241650 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.258806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.266147 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.278836 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.283548 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.292382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.319704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374916 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374968 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477436 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 
16:20:20.477797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.480330 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.481756 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.497873 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.498285 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.584339 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.623536 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.861934 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.862423 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.877999 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.898859 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990406 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990448 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990529 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990560 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990611 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990703 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029265 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts" (OuterVolumeSpecName: "scripts") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029394 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7" (OuterVolumeSpecName: "kube-api-access-rh5d7") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "kube-api-access-rh5d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h" (OuterVolumeSpecName: "kube-api-access-fg94h") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "kube-api-access-fg94h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: W0217 16:20:21.067753 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81822b2e_5592_4ac6_bf30_c8a3f97d7128.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81822b2e_5592_4ac6_bf30_c8a3f97d7128.slice: no such file or directory Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.093801 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.093830 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 
16:20:21.093842 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.098939 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.130626 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data" (OuterVolumeSpecName: "config-data") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.133302 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.163431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config" (OuterVolumeSpecName: "config") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.190982 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197399 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197423 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197434 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197444 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.212204 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.244925 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.301094 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.301123 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.315778 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: E0217 16:20:21.331545 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-conmon-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:21 crc kubenswrapper[4829]: E0217 16:20:21.334655 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-conmon-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.361798 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" exitCode=137 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.361891 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365440 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerDied","Data":"7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365475 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365538 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.385857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerStarted","Data":"2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.404118 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.410174 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.413720 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.413787 4829 scope.go:117] "RemoveContainer" containerID="28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.424847 4829 generic.go:334] "Generic (PLEG): container finished" podID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerID="7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" exitCode=137 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.425632 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerDied","Data":"7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.467850 4829 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.488959 4829 scope.go:117] "RemoveContainer" containerID="a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.498602 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518367 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518565 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" containerID="cri-o://bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518789 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" containerID="cri-o://15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.566730 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.566955 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" containerID="cri-o://960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.567256 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" 
containerName="nova-metadata-metadata" containerID="cri-o://9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.580476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.603400 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.608961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609059 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609200 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609241 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609376 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.620042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.620336 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.627174 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd" (OuterVolumeSpecName: "kube-api-access-k67zd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "kube-api-access-k67zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.627366 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts" (OuterVolumeSpecName: "scripts") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.634466 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712376 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712668 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712678 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712690 4829 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.765653 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.774901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.796699 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.796806 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.816003 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.870697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data" (OuterVolumeSpecName: "config-data") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.870782 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919810 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919987 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.920629 4829 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.920648 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.934893 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.934926 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm" (OuterVolumeSpecName: "kube-api-access-85jpm") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "kube-api-access-85jpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.957082 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.010852 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data" (OuterVolumeSpecName: "config-data") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022366 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022390 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022400 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022409 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.303356 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.315614 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" path="/var/lib/kubelet/pods/08208ef6-e99c-4f83-952c-5828df9b7af8/volumes" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.424968 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.425018 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.429229 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.430584 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.431072 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" 
containerID="cri-o://e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" gracePeriod=600 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439726 4829 generic.go:334] "Generic (PLEG): container finished" podID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerID="ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerDied","Data":"ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439836 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439885 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") 
pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.440009 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.441192 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs" (OuterVolumeSpecName: "logs") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.443752 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.454859 4829 generic.go:334] "Generic (PLEG): container finished" podID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerID="1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.454924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerDied","Data":"1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.455028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" 
event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerStarted","Data":"273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469268 4829 generic.go:334] "Generic (PLEG): container finished" podID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469296 4829 generic.go:334] "Generic (PLEG): container finished" podID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" exitCode=143 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469353 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469379 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"c4d90ff6dc961ef3104c3f1654909960f94137d701493b08670847050b615a45"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469776 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469915 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.475844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr" (OuterVolumeSpecName: "kube-api-access-mmwjr") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "kube-api-access-mmwjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.477222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerDied","Data":"0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.477317 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.491460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.491692 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.497671 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" exitCode=143 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.498325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.505350 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data" (OuterVolumeSpecName: "config-data") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.513741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546781 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546820 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546836 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.557801 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.562695 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.671173 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.687037 4829 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.763499 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.763885 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.766216 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} err="failed to get container status \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": rpc error: 
code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.767147 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.769535 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.769705 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} err="failed to get container status \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.772010 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774143 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} err="failed to get container status \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": 
rpc error: code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774193 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774538 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} err="failed to get container status \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774552 4829 scope.go:117] "RemoveContainer" containerID="7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.814787 4829 scope.go:117] "RemoveContainer" containerID="314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.818905 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.845752 4829 scope.go:117] "RemoveContainer" containerID="7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.854710 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.869428 4829 scope.go:117] "RemoveContainer" 
containerID="82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.896753 4829 scope.go:117] "RemoveContainer" containerID="c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897015 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897501 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="init" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897517 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="init" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897546 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897552 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897567 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897586 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897602 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897607 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 
16:20:22.897619 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897625 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897634 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897655 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897663 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897670 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897684 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897690 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 
16:20:22.897702 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897707 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897919 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897936 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897946 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897958 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897968 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897982 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897996 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.898008 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 
crc kubenswrapper[4829]: I0217 16:20:22.898019 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.900112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.904278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.904375 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.913129 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.925996 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.941019 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.961892 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.983883 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.996313 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.998806 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.000874 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.001506 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.008378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.048920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049031 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049097 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049291 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049727 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.151841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.151988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152019 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152067 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152087 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153556 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153609 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153663 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.156242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 
16:20:23.157184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.157351 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.159914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.160781 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.170151 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.181975 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.234422 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255845 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255907 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.256110 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.256202 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.257106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.259365 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.262504 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.265060 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.270532 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.316644 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555039 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" exitCode=0 Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555430 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" containerID="cri-o://370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" gracePeriod=30 Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555536 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"} Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555980 4829 scope.go:117] "RemoveContainer" containerID="1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.557477 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:20:23 crc kubenswrapper[4829]: E0217 16:20:23.558030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.805611 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.952077 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.101093 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.125867 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.182460 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.182696 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.183339 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17cc49ce-4e47-470a-ad6b-a4127308a7e4" (UID: "17cc49ce-4e47-470a-ad6b-a4127308a7e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.183809 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.188757 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k" (OuterVolumeSpecName: "kube-api-access-ssj5k") pod "17cc49ce-4e47-470a-ad6b-a4127308a7e4" (UID: "17cc49ce-4e47-470a-ad6b-a4127308a7e4"). InnerVolumeSpecName "kube-api-access-ssj5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.286355 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"38fcc02f-9122-4ea6-bb0e-ef135805c127\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.286606 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"38fcc02f-9122-4ea6-bb0e-ef135805c127\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.287459 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.290752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38fcc02f-9122-4ea6-bb0e-ef135805c127" (UID: "38fcc02f-9122-4ea6-bb0e-ef135805c127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.290947 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp" (OuterVolumeSpecName: "kube-api-access-mvwtp") pod "38fcc02f-9122-4ea6-bb0e-ef135805c127" (UID: "38fcc02f-9122-4ea6-bb0e-ef135805c127"). InnerVolumeSpecName "kube-api-access-mvwtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.313258 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" path="/var/lib/kubelet/pods/14067e2a-e82f-44fb-a2df-5b2627647d4c/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.314370 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" path="/var/lib/kubelet/pods/288faaff-8af6-4b89-aa56-5789d3b28b37/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.315133 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" path="/var/lib/kubelet/pods/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.392489 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.392557 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwtp\" (UniqueName: 
\"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.590914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerDied","Data":"273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.590987 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.591089 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.592940 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.594564 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.599942 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.600079 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601752 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601765 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"7f678395f28b403dc65226210aa2f82c7e9fac520b66b5fae571b8af46a56688"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.606926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.606986 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"3f1143368869422a684a872f85799e4eab53674e7f6171e067b82963a2f8f099"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.609250 
4829 generic.go:334] "Generic (PLEG): container finished" podID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerID="035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83" exitCode=0 Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.609326 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerDied","Data":"035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.614973 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerDied","Data":"2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.615010 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.615012 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.644256 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.644229528 podStartE2EDuration="2.644229528s" podCreationTimestamp="2026-02-17 16:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:24.627753892 +0000 UTC m=+1537.044771950" watchObservedRunningTime="2026-02-17 16:20:24.644229528 +0000 UTC m=+1537.061247516" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.630080 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"} Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.702755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:25 crc kubenswrapper[4829]: E0217 16:20:25.703290 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703303 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: E0217 16:20:25.703323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703328 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703586 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703617 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.704423 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708072 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708416 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708426 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.709280 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.747317 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.845909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.845988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.846079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: 
\"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.846119 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.951914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952068 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959203 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959534 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.979228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.070616 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.099048 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159696 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159844 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.167986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v" (OuterVolumeSpecName: "kube-api-access-zrc6v") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "kube-api-access-zrc6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.212419 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts" (OuterVolumeSpecName: "scripts") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.240012 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data" (OuterVolumeSpecName: "config-data") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.244787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262806 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262838 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262849 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262859 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.643621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.646325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerDied","Data":"428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e"} Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.646366 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 
16:20:26.646416 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.725178 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:26 crc kubenswrapper[4829]: E0217 16:20:26.726184 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.726203 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.726426 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.727366 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.729972 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.748621 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.763630 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.775309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.775839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.776053 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.878861 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.878979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.879083 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.884149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.884200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.903743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: 
\"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.121262 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.446651 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.493962 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.494412 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.494453 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.528019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx" (OuterVolumeSpecName: "kube-api-access-gvzjx") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "kube-api-access-gvzjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.568780 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data" (OuterVolumeSpecName: "config-data") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.593742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602156 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602202 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602216 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.635883 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703228 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703360 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.714784 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs" (OuterVolumeSpecName: "logs") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722035 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" exitCode=0 Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722121 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"21037a41552d2f17b0298eab9cadbade38ca54aa96f604942f870e2e7cef5930"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722136 4829 scope.go:117] "RemoveContainer" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722274 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.723914 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj" (OuterVolumeSpecName: "kube-api-access-lf9zj") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "kube-api-access-lf9zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.752258 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.765134 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerStarted","Data":"a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.766797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data" (OuterVolumeSpecName: "config-data") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771795 4829 generic.go:334] "Generic (PLEG): container finished" podID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" exitCode=0 Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771844 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerDied","Data":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771875 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerDied","Data":"0a6baf72f36f68b63d71c5c1e9e99dced488541d38aaf0d4ecd5c3f870c08fd3"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771954 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.805766 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806176 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806189 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806198 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.811297 4829 scope.go:117] "RemoveContainer" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.842985 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.870028 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882086 4829 scope.go:117] "RemoveContainer" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882231 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.882891 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.882955 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882965 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.883006 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883016 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883314 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883343 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883371 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.884456 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.887234 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.887554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.892124 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": container with ID starting with 15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4 not found: ID does not exist" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.892189 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} err="failed to get container status \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": rpc error: code = NotFound desc = could not find container \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": container with ID starting with 15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4 not found: ID does not exist" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.892252 4829 scope.go:117] "RemoveContainer" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.898872 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": container with ID starting with bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0 not found: ID does not 
exist" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.898918 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} err="failed to get container status \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": rpc error: code = NotFound desc = could not find container \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": container with ID starting with bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0 not found: ID does not exist" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.898946 4829 scope.go:117] "RemoveContainer" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910495 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.935801 4829 scope.go:117] "RemoveContainer" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.936481 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": container with ID starting with 370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c not found: ID does not exist" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.936531 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} err="failed to get container status \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": rpc error: code = NotFound desc = could not find container \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": container with ID starting with 370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c not found: ID does not exist" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012625 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012787 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.019027 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.019073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.041106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.078313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.103063 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.131742 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.147107 
4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.148965 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.151423 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.161613 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.220752 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232853 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232943 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334745 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334898 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.335456 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.352407 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.355291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.356050 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.360272 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" path="/var/lib/kubelet/pods/f6e04e6e-a14a-40dc-8938-14c25fe5b775/volumes" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.361219 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" path="/var/lib/kubelet/pods/fcc83a9a-ecb1-46dd-be33-145b81792b63/volumes" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.362287 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.362391 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.424622 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.805179 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"abe67602-ae51-43a0-b450-af654c573d9a","Type":"ContainerStarted","Data":"3077716e588c41a44507c07c5de41c5d7d6babfb3e348a3cb7fef8e4bbd70e1a"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.142798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.231971 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.822410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"abe67602-ae51-43a0-b450-af654c573d9a","Type":"ContainerStarted","Data":"51b71a30ce15c56c718cb73e47a02d807264cff0e06a64a34ed6fc7686b8e02a"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.822691 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.828038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.829043 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.830352 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerStarted","Data":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.830374 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerStarted","Data":"c52c06fac7bbd9c26185cdf4701a182bdfd4bd0e4897e4f1d991aa5849c43671"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.831917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.831940 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"622a936aec57e0c945ae7671635046510015465545d885452898518495289721"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.852493 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.8524567530000002 podStartE2EDuration="3.852456753s" podCreationTimestamp="2026-02-17 16:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:29.838902066 +0000 UTC m=+1542.255920044" watchObservedRunningTime="2026-02-17 16:20:29.852456753 +0000 UTC m=+1542.269474761" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.881312 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.881294804 podStartE2EDuration="2.881294804s" podCreationTimestamp="2026-02-17 16:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:29.854047586 +0000 UTC m=+1542.271065564" watchObservedRunningTime="2026-02-17 16:20:29.881294804 +0000 UTC m=+1542.298312782" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.902486 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.106170176 podStartE2EDuration="7.902466897s" podCreationTimestamp="2026-02-17 16:20:22 +0000 UTC" firstStartedPulling="2026-02-17 16:20:23.81111725 +0000 UTC m=+1536.228135228" lastFinishedPulling="2026-02-17 16:20:28.607413971 +0000 UTC m=+1541.024431949" observedRunningTime="2026-02-17 16:20:29.878061156 +0000 UTC m=+1542.295079134" watchObservedRunningTime="2026-02-17 16:20:29.902466897 +0000 UTC m=+1542.319484875" Feb 17 16:20:30 crc kubenswrapper[4829]: I0217 16:20:30.843182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} Feb 17 16:20:30 crc kubenswrapper[4829]: I0217 16:20:30.873888 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.873875061 podStartE2EDuration="2.873875061s" podCreationTimestamp="2026-02-17 16:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:30.863271883 +0000 UTC m=+1543.280289861" watchObservedRunningTime="2026-02-17 16:20:30.873875061 +0000 UTC m=+1543.290893039" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.222520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.316884 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.316947 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.332759 4829 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.332781 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.928415 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerStarted","Data":"42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354"} Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.962498 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-89gpt" podStartSLOduration=2.301000037 podStartE2EDuration="9.962474809s" podCreationTimestamp="2026-02-17 16:20:25 +0000 UTC" firstStartedPulling="2026-02-17 16:20:26.771330294 +0000 UTC m=+1539.188348272" lastFinishedPulling="2026-02-17 16:20:34.432805066 +0000 UTC m=+1546.849823044" observedRunningTime="2026-02-17 16:20:34.948916121 +0000 UTC m=+1547.365934109" watchObservedRunningTime="2026-02-17 16:20:34.962474809 +0000 UTC m=+1547.379492787" Feb 17 16:20:36 crc kubenswrapper[4829]: I0217 16:20:36.279619 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:20:36 crc kubenswrapper[4829]: E0217 16:20:36.280251 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.165608 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.975012 4829 generic.go:334] "Generic (PLEG): container finished" podID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerID="42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354" exitCode=0 Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.975089 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerDied","Data":"42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354"} Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.222136 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.277037 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.426175 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.426235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.077462 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.490841 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.509865 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.510250 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552083 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552171 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552204 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.560548 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h" (OuterVolumeSpecName: "kube-api-access-wj88h") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "kube-api-access-wj88h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.564216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts" (OuterVolumeSpecName: "scripts") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.590823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data" (OuterVolumeSpecName: "config-data") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.593630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655484 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655530 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655543 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655557 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.034860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerDied","Data":"a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64"} Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.035218 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.034907 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.835430 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:40 crc kubenswrapper[4829]: E0217 16:20:40.835989 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.836003 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.836201 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.838308 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.846970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.847164 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.847225 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.851879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988192 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc 
kubenswrapper[4829]: I0217 16:20:40.988259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091283 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091474 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.096491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.099526 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.111017 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.111415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.171029 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.721467 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:42 crc kubenswrapper[4829]: I0217 16:20:42.057420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"c4afff1a2ba6d2a5ca1bb51c6475f556a5d2736c3b4ec308f87e7a0a06dccc60"} Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.078830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.322357 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.333489 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.339231 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.394806 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395121 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent" containerID="cri-o://b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395259 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" 
containerName="proxy-httpd" containerID="cri-o://89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395302 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core" containerID="cri-o://9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395335 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent" containerID="cri-o://e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.405388 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.734479 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.097860 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" exitCode=0 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098479 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" exitCode=2 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098543 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" 
containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" exitCode=0 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.097939 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098693 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.112652 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:20:45 crc kubenswrapper[4829]: I0217 16:20:45.112915 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.153208 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fcc02f_9122_4ea6_bb0e_ef135805c127.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fcc02f_9122_4ea6_bb0e_ef135805c127.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.153781 4829 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cc49ce_4e47_470a_ad6b_a4127308a7e4.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cc49ce_4e47_470a_ad6b_a4127308a7e4.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.172256 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288faaff_8af6_4b89_aa56_5789d3b28b37.slice/crio-9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f.scope WatchSource:0}: Error finding container 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f: Status 404 returned error can't find the container with id 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.172505 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288faaff_8af6_4b89_aa56_5789d3b28b37.slice/crio-960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a.scope WatchSource:0}: Error finding container 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a: Status 404 returned error can't find the container with id 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.178216 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc89e689f_68fd_4357_a2a0_1d4b8d130702.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc89e689f_68fd_4357_a2a0_1d4b8d130702.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.277049 4829 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.277601 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda05ad89_4eff_401a_9006_935800aab7d9.slice/crio-7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd.scope\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.284564 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda05ad89_4eff_401a_9006_935800aab7d9.slice/crio-7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.075597 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.134862 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"}
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136573 4829 generic.go:334] "Generic (PLEG): container finished" podID="da05ad89-4eff-401a-9006-935800aab7d9" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" exitCode=137
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136700 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136715 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerDied","Data":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"}
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136776 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerDied","Data":"571dce0f3dca1580b88fc77df97f1e4a84daf42acff7755a8cd9c913181ac9b2"}
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136800 4829 scope.go:117] "RemoveContainer" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.160880 4829 scope.go:117] "RemoveContainer" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"
Feb 17 16:20:46 crc kubenswrapper[4829]: E0217 16:20:46.161334 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": container with ID starting with 7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd not found: ID does not exist" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.161379 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"} err="failed to get container status \"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": rpc error: code = NotFound desc = could not find container \"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": container with ID starting with 7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd not found: ID does not exist"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208410 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") "
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208663 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") "
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208756 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") "
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.215611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5" (OuterVolumeSpecName: "kube-api-access-bs5q5") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "kube-api-access-bs5q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.247472 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data" (OuterVolumeSpecName: "config-data") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.253222 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311243 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311272 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311281 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.463873 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.475820 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.490455 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:20:46 crc kubenswrapper[4829]: E0217 16:20:46.491018 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.491036 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.491255 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.492077 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.494020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.495725 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.498862 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.507986 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620672 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620717 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.621195 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723904 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.724021 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.724106 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.731044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.733420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.734172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.745965 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.746510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.809791 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:20:47 crc kubenswrapper[4829]: I0217 16:20:47.406092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.019498 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171263 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" exitCode=0
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171402 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"}
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"3f1143368869422a684a872f85799e4eab53674e7f6171e067b82963a2f8f099"}
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171450 4829 scope.go:117] "RemoveContainer" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171602 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.176712 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044","Type":"ContainerStarted","Data":"dcfb31c558debe06e87a6975cd538adbc1f28025b77622dd134a53ec2f462af8"}
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.176986 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044","Type":"ContainerStarted","Data":"785e9ac7c74b47df9879880dd011fc9def07c1669535efc483de5a1372e3fc5e"}
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"}
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180182 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" containerID="cri-o://41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" gracePeriod=30
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180717 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" containerID="cri-o://eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" gracePeriod=30
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180778 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" containerID="cri-o://0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" gracePeriod=30
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180841 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" containerID="cri-o://25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" gracePeriod=30
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211347 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211659 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211794 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211865 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211991 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212110 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212282 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") "
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212109 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.214013 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.215098 4829 scope.go:117] "RemoveContainer" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.216178 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.220214 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7" (OuterVolumeSpecName: "kube-api-access-6hkr7") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "kube-api-access-6hkr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.225119 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts" (OuterVolumeSpecName: "scripts") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.231407 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.231386904 podStartE2EDuration="2.231386904s" podCreationTimestamp="2026-02-17 16:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:48.193900435 +0000 UTC m=+1560.610918413" watchObservedRunningTime="2026-02-17 16:20:48.231386904 +0000 UTC m=+1560.648404882"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.237526 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.501004793 podStartE2EDuration="8.237510039s" podCreationTimestamp="2026-02-17 16:20:40 +0000 UTC" firstStartedPulling="2026-02-17 16:20:41.723238491 +0000 UTC m=+1554.140256469" lastFinishedPulling="2026-02-17 16:20:47.459743737 +0000 UTC m=+1559.876761715" observedRunningTime="2026-02-17 16:20:48.212220428 +0000 UTC m=+1560.629238406" watchObservedRunningTime="2026-02-17 16:20:48.237510039 +0000 UTC m=+1560.654528017"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.294943 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.295245 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.309689 4829 scope.go:117] "RemoveContainer" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.317765 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da05ad89-4eff-401a-9006-935800aab7d9" path="/var/lib/kubelet/pods/da05ad89-4eff-401a-9006-935800aab7d9/volumes"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.317883 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.318875 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.318957 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.332833 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.358790 4829 scope.go:117] "RemoveContainer" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.362510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.379304 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data" (OuterVolumeSpecName: "config-data") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.388550 4829 scope.go:117] "RemoveContainer" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389017 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": container with ID starting with 89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4 not found: ID does not exist" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389055 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} err="failed to get container status \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": rpc error: code = NotFound desc = could not find container \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": container with ID starting with 89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4 not found: ID does not exist"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389079 4829 scope.go:117] "RemoveContainer" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389369 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": container with ID starting with 9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9 not found: ID does not exist" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389394 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} err="failed to get container status \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": rpc error: code = NotFound desc = could not find container \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": container with ID starting with 9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9 not found: ID does not exist"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389415 4829 scope.go:117] "RemoveContainer" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389705 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": container with ID starting with e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238 not found: ID does not exist" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389731 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"} err="failed to get container status \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": rpc error: code = NotFound desc = could not find container \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": container with ID starting with e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238 not found: ID does not exist"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389747 4829 scope.go:117] "RemoveContainer" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.390009 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": container with ID starting with b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75 not found: ID does not exist" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.390034 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} err="failed to get container status \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": rpc error: code = NotFound desc = could not find container \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": container with ID starting with b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75 not found: ID does not exist"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424500 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424531 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424542 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.429959 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.430508 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.431649 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.434399 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.512184 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.532626 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.554150 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555026 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555049 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555079 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555089 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555117 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555126 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core"
Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555147 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555155 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555458 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555489 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555510 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555538 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core"
Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.558657 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.566606 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.575857 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.576092 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633076 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633166 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633202 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " 
pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633419 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633648 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735761 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736291 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736542 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736637 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.737104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc 
kubenswrapper[4829]: I0217 16:20:48.737357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.740558 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.740627 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.741866 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.743035 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.763297 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: 
\"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.012881 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.232951 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233203 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233212 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.234232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.234246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.235162 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.238731 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.437697 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.454929 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.502135 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582004 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582299 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 
16:20:49.582391 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.673649 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684826 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod 
\"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.685028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.685859 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") 
" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.695910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699729 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.711768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc 
kubenswrapper[4829]: I0217 16:20:49.782058 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.249200 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"4f38d7e9c21e5a5bb4aa4283aef17c56de184252a9a841ed16ca27e145f9895d"} Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.293510 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" path="/var/lib/kubelet/pods/0bda35ab-f2ff-46ac-8733-76b7df307990/volumes" Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.295129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:50 crc kubenswrapper[4829]: W0217 16:20:50.296628 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fdb8e01_6d92_47be_a6a8_4d2e39d42152.slice/crio-9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee WatchSource:0}: Error finding container 9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee: Status 404 returned error can't find the container with id 9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.259721 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.262670 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerID="d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68" exitCode=0 Feb 17 16:20:51 crc 
kubenswrapper[4829]: I0217 16:20:51.262766 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.262810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerStarted","Data":"9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.811327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.099477 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.274714 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerStarted","Data":"5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.274798 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276329 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" containerID="cri-o://a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" gracePeriod=30 Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276382 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" containerID="cri-o://3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" gracePeriod=30 Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.331892 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" podStartSLOduration=3.33187514 podStartE2EDuration="3.33187514s" podCreationTimestamp="2026-02-17 16:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:52.296628531 +0000 UTC m=+1564.713646509" watchObservedRunningTime="2026-02-17 16:20:52.33187514 +0000 UTC m=+1564.748893118" Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.291530 4829 generic.go:334] "Generic (PLEG): container finished" podID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" exitCode=143 Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.293061 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} Feb 17 16:20:53 crc kubenswrapper[4829]: 
E0217 16:20:53.519572 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.688975 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.304764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.305006 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.340640 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.880608074 podStartE2EDuration="6.340611236s" podCreationTimestamp="2026-02-17 16:20:48 +0000 UTC" firstStartedPulling="2026-02-17 16:20:49.705673199 +0000 UTC m=+1562.122691177" lastFinishedPulling="2026-02-17 16:20:53.165676351 +0000 UTC m=+1565.582694339" observedRunningTime="2026-02-17 16:20:54.327786631 +0000 UTC m=+1566.744804609" watchObservedRunningTime="2026-02-17 16:20:54.340611236 +0000 UTC m=+1566.757629234" Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315068 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" containerID="cri-o://432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 
16:20:55.315098 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" containerID="cri-o://71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315114 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" containerID="cri-o://954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315158 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" containerID="cri-o://3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" gracePeriod=30 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.061844 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226457 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226508 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226631 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.227476 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs" (OuterVolumeSpecName: "logs") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.234915 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z" (OuterVolumeSpecName: "kube-api-access-42x7z") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "kube-api-access-42x7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.291251 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data" (OuterVolumeSpecName: "config-data") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.307739 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329510 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329545 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329556 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329564 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337377 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337404 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" exitCode=2 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337412 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337486 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346668 4829 generic.go:334] "Generic (PLEG): container finished" podID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346757 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"622a936aec57e0c945ae7671635046510015465545d885452898518495289721"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346777 4829 scope.go:117] "RemoveContainer" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346782 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.375671 4829 scope.go:117] "RemoveContainer" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.399469 4829 scope.go:117] "RemoveContainer" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.399680 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.400018 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": container with ID starting with 3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2 not found: ID does not exist" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400057 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} err="failed to get container status \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": rpc error: code = NotFound desc = could not find container \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": container with ID starting with 3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2 not found: ID does not exist" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400080 4829 scope.go:117] "RemoveContainer" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.400399 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": container with ID starting with a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5 not found: ID does not exist" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400421 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} err="failed to get container status \"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": rpc error: code = NotFound desc = could not find container \"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": container with ID starting with a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5 not found: ID does not exist" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.420670 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438282 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.438881 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438898 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.438918 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438925 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 
16:20:56.439158 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.439194 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.440524 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444413 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444644 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444786 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.453504 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635409 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.636809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738739 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") 
pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738829 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738867 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738925 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.739233 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743137 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743599 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.757469 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.757978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.801449 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.811308 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.847950 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.319452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:57 crc kubenswrapper[4829]: W0217 16:20:57.326816 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae839887_6e18_4062_bf65_95cef31fdd49.slice/crio-6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6 WatchSource:0}: Error finding container 6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6: Status 404 returned error can't find the container with id 6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6 Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.410599 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6"} Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.429876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:57 crc kubenswrapper[4829]: E0217 16:20:57.542506 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:57 
crc kubenswrapper[4829]: I0217 16:20:57.615341 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.617400 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.621382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.622166 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.648469 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671409 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: 
\"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671529 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.773555 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.773967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.774095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.774172 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: 
\"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.783503 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.783533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.792276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.792501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.915952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.190913 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.302914 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" path="/var/lib/kubelet/pods/29ec0e6f-a70b-414f-880d-59dec9878ff0/volumes" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.390839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391265 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" 
(UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391493 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.394060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.395016 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.403069 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw" (OuterVolumeSpecName: "kube-api-access-v6kxw") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). 
InnerVolumeSpecName "kube-api-access-v6kxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.411072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts" (OuterVolumeSpecName: "scripts") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.439774 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445841 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" exitCode=0 Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445895 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445922 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"4f38d7e9c21e5a5bb4aa4283aef17c56de184252a9a841ed16ca27e145f9895d"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445938 4829 scope.go:117] "RemoveContainer" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 
16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.446071 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.464307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.464844 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.493378 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.493359418 podStartE2EDuration="2.493359418s" podCreationTimestamp="2026-02-17 16:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:58.490759168 +0000 UTC m=+1570.907777146" watchObservedRunningTime="2026-02-17 16:20:58.493359418 +0000 UTC m=+1570.910377396" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495085 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495115 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495124 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6kxw\" (UniqueName: 
\"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495136 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495144 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.540690 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.581743 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data" (OuterVolumeSpecName: "config-data") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.599313 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.599348 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.698555 4829 scope.go:117] "RemoveContainer" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.731024 4829 scope.go:117] "RemoveContainer" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.755711 4829 scope.go:117] "RemoveContainer" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808265 4829 scope.go:117] "RemoveContainer" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.808652 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": container with ID starting with 3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7 not found: ID does not exist" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808697 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} 
err="failed to get container status \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": rpc error: code = NotFound desc = could not find container \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": container with ID starting with 3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808722 4829 scope.go:117] "RemoveContainer" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.808947 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": container with ID starting with 71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1 not found: ID does not exist" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808982 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} err="failed to get container status \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": rpc error: code = NotFound desc = could not find container \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": container with ID starting with 71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808998 4829 scope.go:117] "RemoveContainer" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.809244 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": container with ID starting with 954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d not found: ID does not exist" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809289 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} err="failed to get container status \"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": rpc error: code = NotFound desc = could not find container \"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": container with ID starting with 954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809317 4829 scope.go:117] "RemoveContainer" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.809596 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": container with ID starting with 432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472 not found: ID does not exist" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809622 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} err="failed to get container status \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": rpc error: code = NotFound desc = could not find container \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": container with ID 
starting with 432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.817630 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.832701 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.852803 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853527 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853545 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853571 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853590 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853603 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853609 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853635 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: 
I0217 16:20:58.853641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853863 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853885 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853907 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853918 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.856020 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.859773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.859930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.867103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.881242 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.008888 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.008965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009110 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009444 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009524 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" 
Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113108 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113618 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113651 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.118010 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.118278 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.121164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.121342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.132931 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.233165 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.490107 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerStarted","Data":"162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4"} Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.490799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerStarted","Data":"512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7"} Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.509749 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8dvtl" podStartSLOduration=2.509732204 podStartE2EDuration="2.509732204s" podCreationTimestamp="2026-02-17 16:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:59.507520314 +0000 UTC m=+1571.924538292" watchObservedRunningTime="2026-02-17 16:20:59.509732204 +0000 UTC m=+1571.926750182" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.776110 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.785744 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:59 crc 
kubenswrapper[4829]: I0217 16:20:59.881662 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.882149 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" containerID="cri-o://09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" gracePeriod=10 Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.314228 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" path="/var/lib/kubelet/pods/8527b72c-dacf-4126-9b7b-06a0294d6ac0/volumes" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.513860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.513902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"917e80d190c9f417c6d7ad24e1ab772a0f50f28f3fab4aadaa2a3c83b5714c95"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515616 4829 generic.go:334] "Generic (PLEG): container finished" podID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerID="09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" exitCode=0 Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515890 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515904 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.584929 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779629 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779670 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779816 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod 
\"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779880 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.780017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.792302 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl" (OuterVolumeSpecName: "kube-api-access-dmtxl") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "kube-api-access-dmtxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.850949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.851884 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config" (OuterVolumeSpecName: "config") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.858551 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883803 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883832 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883843 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883853 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 
17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.889013 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.912202 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.985370 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.985409 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.528011 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.528186 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"} Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.587829 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.598212 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:21:02 crc kubenswrapper[4829]: I0217 16:21:02.291240 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" path="/var/lib/kubelet/pods/52a2d626-5ff1-4f8c-80d1-3b90906b5a96/volumes" Feb 17 16:21:02 crc kubenswrapper[4829]: I0217 16:21:02.551101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"} Feb 17 16:21:03 crc kubenswrapper[4829]: I0217 16:21:03.280489 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:03 crc kubenswrapper[4829]: E0217 16:21:03.280823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.579831 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"} Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.580442 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.581973 4829 generic.go:334] "Generic (PLEG): container finished" podID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerID="162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4" exitCode=0 Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.582014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerDied","Data":"162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4"} Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.610832 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4473001180000002 podStartE2EDuration="6.610811071s" podCreationTimestamp="2026-02-17 16:20:58 +0000 UTC" firstStartedPulling="2026-02-17 16:20:59.786039654 +0000 UTC m=+1572.203057632" lastFinishedPulling="2026-02-17 16:21:03.949550607 +0000 UTC m=+1576.366568585" observedRunningTime="2026-02-17 16:21:04.607097001 +0000 UTC m=+1577.024115019" watchObservedRunningTime="2026-02-17 16:21:04.610811071 +0000 UTC m=+1577.027829049" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.105213 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130611 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130689 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.131074 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.139002 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts" (OuterVolumeSpecName: "scripts") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.159024 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv" (OuterVolumeSpecName: "kube-api-access-w4qcv") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "kube-api-access-w4qcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.201734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.214172 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data" (OuterVolumeSpecName: "config-data") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235734 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235767 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235776 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235785 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609049 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerDied","Data":"512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7"} Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609096 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609175 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.801896 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.803994 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.849782 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.891892 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.892226 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" containerID="cri-o://953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80" gracePeriod=30 Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.892295 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" containerID="cri-o://027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d" gracePeriod=30 Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.917239 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.917508 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" containerID="cri-o://2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" gracePeriod=30 Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.625189 4829 
generic.go:334] "Generic (PLEG): container finished" podID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerID="953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80" exitCode=143 Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.626222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80"} Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.813786 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.814277 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:07 crc kubenswrapper[4829]: E0217 16:21:07.987178 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.223195 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" 
containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.223928 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.224321 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.224388 4829 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.251853 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.276059 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.381852 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.382033 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.382506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.389946 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb" (OuterVolumeSpecName: "kube-api-access-pwrqb") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "kube-api-access-pwrqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.418037 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data" (OuterVolumeSpecName: "config-data") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.423503 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485439 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485587 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485998 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644327 4829 generic.go:334] "Generic (PLEG): container finished" podID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" 
exitCode=0 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644388 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerDied","Data":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644421 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerDied","Data":"c52c06fac7bbd9c26185cdf4701a182bdfd4bd0e4897e4f1d991aa5849c43671"} Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644479 4829 scope.go:117] "RemoveContainer" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644809 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" containerID="cri-o://20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa" gracePeriod=30 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644907 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" containerID="cri-o://717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794" gracePeriod=30 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.689724 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.713492 4829 scope.go:117] "RemoveContainer" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 
crc kubenswrapper[4829]: E0217 16:21:08.713915 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": container with ID starting with 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 not found: ID does not exist" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.713983 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} err="failed to get container status \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": rpc error: code = NotFound desc = could not find container \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": container with ID starting with 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 not found: ID does not exist" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.741227 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.760377 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764456 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764484 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764555 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 
16:21:08.764564 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764597 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764605 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764618 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="init" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764625 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="init" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765237 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765262 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765310 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.766268 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.770142 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.779103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.894953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.895257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.895481 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997553 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997813 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.003074 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.004313 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.021192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.133789 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.666396 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.668826 4829 generic.go:334] "Generic (PLEG): container finished" podID="ae839887-6e18-4062-bf65-95cef31fdd49" containerID="20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa" exitCode=143 Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.668904 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa"} Feb 17 16:21:09 crc kubenswrapper[4829]: W0217 16:21:09.677544 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d63bbb_2d26_4b85_8241_2785a5194a21.slice/crio-b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f WatchSource:0}: Error finding container b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f: Status 404 returned error can't find the container with id b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.076851 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 10.217.0.2:52662->10.217.0.245:8775: read: connection reset by peer" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.076920 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 
10.217.0.2:52648->10.217.0.245:8775: read: connection reset by peer" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.308597 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" path="/var/lib/kubelet/pods/0b803a04-fbc0-4844-aa4f-b8302c15024f/volumes" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.684458 4829 generic.go:334] "Generic (PLEG): container finished" podID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerID="027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d" exitCode=0 Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.684540 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.686301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37d63bbb-2d26-4b85-8241-2785a5194a21","Type":"ContainerStarted","Data":"8b9f6eae650b9b2b5280896b488f52a730430d9a560030e5a10b92062d67d42d"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.686344 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37d63bbb-2d26-4b85-8241-2785a5194a21","Type":"ContainerStarted","Data":"b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.712684 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.712659774 podStartE2EDuration="2.712659774s" podCreationTimestamp="2026-02-17 16:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:10.707446663 +0000 UTC m=+1583.124464671" watchObservedRunningTime="2026-02-17 
16:21:10.712659774 +0000 UTC m=+1583.129677762" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.380854 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565861 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565981 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.566012 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " 
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.567012 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs" (OuterVolumeSpecName: "logs") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.573781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb" (OuterVolumeSpecName: "kube-api-access-rljnb") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "kube-api-access-rljnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.601458 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data" (OuterVolumeSpecName: "config-data") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.606608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.647514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670103 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670150 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670166 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670179 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670194 4829 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.713916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"7f678395f28b403dc65226210aa2f82c7e9fac520b66b5fae571b8af46a56688"}
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.713934 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.714262 4829 scope.go:117] "RemoveContainer" containerID="027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.763022 4829 scope.go:117] "RemoveContainer" containerID="953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.769319 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.788968 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.802380 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:21:11 crc kubenswrapper[4829]: E0217 16:21:11.802930 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.802948 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata"
Feb 17 16:21:11 crc kubenswrapper[4829]: E0217 16:21:11.802995 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.803003 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.803226 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.803256 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.804474 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.808014 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.808192 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.837675 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890500 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993179 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993267 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993559 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.994293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.999172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.999322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.000307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.008500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0"
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.125479 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.322964 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" path="/var/lib/kubelet/pods/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea/volumes"
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.614766 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:21:12 crc kubenswrapper[4829]: W0217 16:21:12.617146 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0afa824_7a82_41cc_9274_28689e2f3f57.slice/crio-91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6 WatchSource:0}: Error finding container 91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6: Status 404 returned error can't find the container with id 91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6
Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.724602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6"}
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.738696 4829 generic.go:334] "Generic (PLEG): container finished" podID="ae839887-6e18-4062-bf65-95cef31fdd49" containerID="717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794" exitCode=0
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.738786 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794"}
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.739317 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6"}
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.739333 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6"
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.741938 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"830d5ac2e08e914204172ecc65baba07c733cd6fbd5a56e924f7eb7be6317787"}
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.741988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"1d551bd5742f917f6e1b515eb133fdfc160b96b6b92de9274b9d3485cd2697f0"}
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.769864 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.769842388 podStartE2EDuration="2.769842388s" podCreationTimestamp="2026-02-17 16:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:13.761831803 +0000 UTC m=+1586.178849781" watchObservedRunningTime="2026-02-17 16:21:13.769842388 +0000 UTC m=+1586.186860366"
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.800240 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868747 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868906 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869056 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869125 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869173 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs" (OuterVolumeSpecName: "logs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869736 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") "
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.870446 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.874587 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq" (OuterVolumeSpecName: "kube-api-access-zd5nq") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "kube-api-access-zd5nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.910019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data" (OuterVolumeSpecName: "config-data") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.918359 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.942750 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.962287 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973148 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973188 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973201 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973215 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973228 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.134499 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.755110 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.784420 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.805023 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.821560 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:21:14 crc kubenswrapper[4829]: E0217 16:21:14.822329 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822360 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log"
Feb 17 16:21:14 crc kubenswrapper[4829]: E0217 16:21:14.822418 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822433 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822888 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822915 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.825002 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845007 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845463 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.853806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897599 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897675 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897739 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007367 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007887 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007981 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.008006 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.008090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.008121 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.009977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.023891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.028138 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.028257 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.042094 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.046264 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.183149 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:21:15 crc kubenswrapper[4829]: W0217 16:21:15.686016 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62d7182c_e529_468f_8022_9fd5fc66b554.slice/crio-4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570 WatchSource:0}: Error finding container 4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570: Status 404 returned error can't find the container with id 4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.692280 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.766899 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570"}
Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.281516 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:21:16 crc kubenswrapper[4829]: E0217 16:21:16.282144 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.308972 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" path="/var/lib/kubelet/pods/ae839887-6e18-4062-bf65-95cef31fdd49/volumes"
Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.783702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"a8d97ed8c6afd6807abc872f429f98f5cb7e62719b360704b2aaa301cc509d46"}
Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.783761 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"ba380e909e775b3fbd3bc14cdd75dda2ae285393e17cad1bd3158821c5f992d0"}
Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.826666 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.826647073 podStartE2EDuration="2.826647073s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:16.814364512 +0000 UTC m=+1589.231382500" watchObservedRunningTime="2026-02-17 16:21:16.826647073 +0000 UTC m=+1589.243665051"
Feb 17 16:21:17 crc kubenswrapper[4829]: I0217 16:21:17.126369 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:21:17 crc kubenswrapper[4829]: I0217 16:21:17.126699 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:21:18 crc kubenswrapper[4829]: E0217 16:21:18.316930 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.759596 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807177 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") "
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807286 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") "
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") "
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807521 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") "
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.831833 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts" (OuterVolumeSpecName: "scripts") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837734 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" exitCode=137
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837782 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"}
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837813 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"c4afff1a2ba6d2a5ca1bb51c6475f556a5d2736c3b4ec308f87e7a0a06dccc60"}
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837833 4829 scope.go:117] "RemoveContainer" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837911 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.862730 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg" (OuterVolumeSpecName: "kube-api-access-hj6sg") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "kube-api-access-hj6sg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.912425 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.912466 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.985726 4829 scope.go:117] "RemoveContainer" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.046421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.108747 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data" (OuterVolumeSpecName: "config-data") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.109619 4829 scope.go:117] "RemoveContainer" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.127810 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.127845 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.135131 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.154652 4829 scope.go:117] "RemoveContainer" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.225641 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.237240 4829 scope.go:117] "RemoveContainer" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241536 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.241661 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": container with ID starting with 0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6 not found: ID does not exist" 
containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241724 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"} err="failed to get container status \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": rpc error: code = NotFound desc = could not find container \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": container with ID starting with 0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6 not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241751 4829 scope.go:117] "RemoveContainer" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.252352 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": container with ID starting with eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af not found: ID does not exist" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252566 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"} err="failed to get container status \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": rpc error: code = NotFound desc = could not find container \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": container with ID starting with eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252674 4829 scope.go:117] 
"RemoveContainer" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252626 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.263332 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": container with ID starting with 25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64 not found: ID does not exist" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.263538 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} err="failed to get container status \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": rpc error: code = NotFound desc = could not find container \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": container with ID starting with 25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64 not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.263654 4829 scope.go:117] "RemoveContainer" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.271831 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": container with ID starting with 41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a not found: ID does not exist" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 
16:21:19.271873 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} err="failed to get container status \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": rpc error: code = NotFound desc = could not find container \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": container with ID starting with 41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.281821 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282285 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282297 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282307 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282313 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282345 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282350 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282554 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282589 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282602 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282614 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.285156 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296261 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296459 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296567 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296593 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296761 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.330108 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332585 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332967 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.333054 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435236 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: 
I0217 16:21:19.435277 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435324 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.445755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.447530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" 
Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.457980 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.458056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.460740 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.473840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.645165 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.886897 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:21:20 crc kubenswrapper[4829]: W0217 16:21:20.145863 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d7c5e4_0195_41e6_afd9_9f31d6472d61.slice/crio-a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891 WatchSource:0}: Error finding container a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891: Status 404 returned error can't find the container with id a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891 Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.159550 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.294595 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aced48a-e424-4579-a0f3-681531606707" path="/var/lib/kubelet/pods/0aced48a-e424-4579-a0f3-681531606707/volumes" Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.863405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891"} Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.126780 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.127353 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.891397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"70b2d242e7e123ca0465bb9778178ee3ee64a382e5d26bb2eaf1c75482b55605"} Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.142924 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0afa824-7a82-41cc-9274-28689e2f3f57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.142997 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0afa824-7a82-41cc-9274-28689e2f3f57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:23 crc kubenswrapper[4829]: E0217 16:21:23.491504 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.905108 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"4ea447f5414056a4f47899ccee039a39288b7ce44013f7f5a59b1248929852e3"} Feb 17 16:21:24 crc kubenswrapper[4829]: I0217 16:21:24.941674 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"377114032ef56a4ca0f06c429fb23a5271744cdc92228b8cdbcfc86338e02444"} Feb 17 16:21:24 crc kubenswrapper[4829]: I0217 16:21:24.942239 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"0bac6265fd29394b09a25f49ceca7d9bf6cc526664a5709395333282e748b99f"} Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.022327 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.808055622 podStartE2EDuration="6.022295281s" podCreationTimestamp="2026-02-17 16:21:19 +0000 UTC" firstStartedPulling="2026-02-17 16:21:20.149693846 +0000 UTC m=+1592.566711844" lastFinishedPulling="2026-02-17 16:21:24.363933535 +0000 UTC m=+1596.780951503" observedRunningTime="2026-02-17 16:21:25.013801793 +0000 UTC m=+1597.430819781" watchObservedRunningTime="2026-02-17 16:21:25.022295281 +0000 UTC m=+1597.439313269" Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.183667 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.183718 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:26 crc kubenswrapper[4829]: I0217 16:21:26.194728 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62d7182c-e529-468f-8022-9fd5fc66b554" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:26 crc kubenswrapper[4829]: I0217 16:21:26.194737 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62d7182c-e529-468f-8022-9fd5fc66b554" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:27 crc kubenswrapper[4829]: I0217 16:21:27.279945 4829 scope.go:117] "RemoveContainer" 
containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:27 crc kubenswrapper[4829]: E0217 16:21:27.280380 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:28 crc kubenswrapper[4829]: E0217 16:21:28.653406 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:29 crc kubenswrapper[4829]: I0217 16:21:29.254142 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.140668 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.142890 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.149197 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.159235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.032383 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 
16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.032867 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics" containerID="cri-o://1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" gracePeriod=30
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.206101 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.206348 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter" containerID="cri-o://310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" gracePeriod=30
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.675648 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.839192 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"2003bd16-d251-4004-9eca-9e47fb54e514\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") "
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.849323 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk" (OuterVolumeSpecName: "kube-api-access-n4pdk") pod "2003bd16-d251-4004-9eca-9e47fb54e514" (UID: "2003bd16-d251-4004-9eca-9e47fb54e514"). InnerVolumeSpecName "kube-api-access-n4pdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.934140 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.944643 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046399 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") "
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046806 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") "
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046833 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") "
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.055033 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8" (OuterVolumeSpecName: "kube-api-access-w6tr8") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "kube-api-access-w6tr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076010 4829 generic.go:334] "Generic (PLEG): container finished" podID="2003bd16-d251-4004-9eca-9e47fb54e514" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" exitCode=2
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerDied","Data":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"}
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerDied","Data":"f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d"}
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076133 4829 scope.go:117] "RemoveContainer" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076272 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.080678 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082880 4829 generic.go:334] "Generic (PLEG): container finished" podID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" exitCode=2
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082923 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082928 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerDied","Data":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"}
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082962 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerDied","Data":"16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c"}
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.122199 4829 scope.go:117] "RemoveContainer" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"
Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.124480 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": container with ID starting with 1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4 not found: ID does not exist" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.124541 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"} err="failed to get container status \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": rpc error: code = NotFound desc = could not find container \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": container with ID starting with 1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4 not found: ID does not exist"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.124592 4829 scope.go:117] "RemoveContainer" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.126119 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.146453 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data" (OuterVolumeSpecName: "config-data") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149689 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149717 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149727 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") on node \"crc\" DevicePath \"\""
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.162503 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.198497 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.200055 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.201655 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203011 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.203606 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203620 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter"
Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.203642 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203648 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.204235 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.204280 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.205599 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.208021 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.209692 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.212856 4829 scope.go:117] "RemoveContainer" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"
Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.215055 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": container with ID starting with 310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088 not found: ID does not exist" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.215094 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"} err="failed to get container status \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": rpc error: code = NotFound desc = could not find container \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": container with ID starting with 310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088 not found: ID does not exist"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.224373 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.228321 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.361486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.362327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.362521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99lv\" (UniqueName: \"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.363007 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.417916 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.433327 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.447125 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.449112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.452284 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.452490 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.464876 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99lv\" (UniqueName: \"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465984 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.472730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.474020 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.484876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.487862 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99lv\" (UniqueName: \"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.567782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568399 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568552 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568671 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.590268 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.675097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.675627 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.676022 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.697324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0"
Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.889635 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.096015 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:36 crc kubenswrapper[4829]: W0217 16:21:36.105024 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57285ef_f362_4fb7_8f6c_633698507b3d.slice/crio-ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484 WatchSource:0}: Error finding container ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484: Status 404 returned error can't find the container with id ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.112991 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.120341 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.295039 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" path="/var/lib/kubelet/pods/2003bd16-d251-4004-9eca-9e47fb54e514/volumes"
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.296032 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" path="/var/lib/kubelet/pods/b4cfa907-6caa-41a9-b86a-371fd960e471/volumes"
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362311 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362596 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" containerID="cri-o://a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" gracePeriod=30
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362656 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" containerID="cri-o://24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" gracePeriod=30
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362703 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" containerID="cri-o://c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" gracePeriod=30
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362664 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" containerID="cri-o://26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" gracePeriod=30
Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.406758 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.130266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e39a0dce-4da5-4ff4-9e50-e2dc41d22092","Type":"ContainerStarted","Data":"c11694e0707d2732fd1be5cd70d589074588b1a7d6ac63ffb9700e8c895bdf4b"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.130317 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e39a0dce-4da5-4ff4-9e50-e2dc41d22092","Type":"ContainerStarted","Data":"d23a30b732d3550e7f4fd9d33de0bb2e06d49f52f74bf2c1f1b70b86fa8d355f"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135107 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" exitCode=0
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135153 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" exitCode=2
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135162 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" exitCode=0
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135223 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135259 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.137672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f57285ef-f362-4fb7-8f6c-633698507b3d","Type":"ContainerStarted","Data":"c269891f6d51656027160994fcc1575421835dc5b64fd93373cd5c08654cab89"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.137734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f57285ef-f362-4fb7-8f6c-633698507b3d","Type":"ContainerStarted","Data":"ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484"}
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.159975 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.705210744 podStartE2EDuration="2.159950738s" podCreationTimestamp="2026-02-17 16:21:35 +0000 UTC" firstStartedPulling="2026-02-17 16:21:36.416792478 +0000 UTC m=+1608.833810456" lastFinishedPulling="2026-02-17 16:21:36.871532472 +0000 UTC m=+1609.288550450" observedRunningTime="2026-02-17 16:21:37.149686942 +0000 UTC m=+1609.566704940" watchObservedRunningTime="2026-02-17 16:21:37.159950738 +0000 UTC m=+1609.576968716"
Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.200966 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.732849518 podStartE2EDuration="2.200943381s" podCreationTimestamp="2026-02-17 16:21:35 +0000 UTC" firstStartedPulling="2026-02-17 16:21:36.108037875 +0000 UTC m=+1608.525055843" lastFinishedPulling="2026-02-17 16:21:36.576131728 +0000 UTC m=+1608.993149706" observedRunningTime="2026-02-17 16:21:37.174039427 +0000 UTC m=+1609.591057405" watchObservedRunningTime="2026-02-17 16:21:37.200943381 +0000 UTC m=+1609.617961359"
Feb 17 16:21:38 crc kubenswrapper[4829]: I0217 16:21:38.148894 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 17 16:21:38 crc kubenswrapper[4829]: E0217 16:21:38.501907 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:21:38 crc kubenswrapper[4829]: E0217 16:21:38.731828 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.146547 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.192663 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" exitCode=0
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.195873 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"}
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"917e80d190c9f417c6d7ad24e1ab772a0f50f28f3fab4aadaa2a3c83b5714c95"}
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196790 4829 scope.go:117] "RemoveContainer" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.256242 4829 scope.go:117] "RemoveContainer" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269324 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269684 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269745 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269820 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") "
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.271501 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.271823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.277877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts" (OuterVolumeSpecName: "scripts") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.277976 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn" (OuterVolumeSpecName: "kube-api-access-96skn") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "kube-api-access-96skn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.280279 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.280877 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.288865 4829 scope.go:117] "RemoveContainer" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"
Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.311781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375681 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375904 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375916 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375924 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375932 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.380524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.406149 4829 scope.go:117] "RemoveContainer" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.442355 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data" (OuterVolumeSpecName: "config-data") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444229 4829 scope.go:117] "RemoveContainer" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.444807 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": container with ID starting with 24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d not found: ID does not exist" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444848 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"} err="failed to get container status \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": rpc error: code = NotFound desc = could not find container \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": container with ID starting with 24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444873 4829 scope.go:117] "RemoveContainer" 
containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445196 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": container with ID starting with 26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66 not found: ID does not exist" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445229 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"} err="failed to get container status \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": rpc error: code = NotFound desc = could not find container \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": container with ID starting with 26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445249 4829 scope.go:117] "RemoveContainer" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445483 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": container with ID starting with c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1 not found: ID does not exist" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445505 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"} err="failed to get container status \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": rpc error: code = NotFound desc = could not find container \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": container with ID starting with c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445516 4829 scope.go:117] "RemoveContainer" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445817 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": container with ID starting with a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297 not found: ID does not exist" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445840 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"} err="failed to get container status \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": rpc error: code = NotFound desc = could not find container \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": container with ID starting with a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.477931 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc 
kubenswrapper[4829]: I0217 16:21:39.477993 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.535383 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.558618 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.571745 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572318 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572335 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572349 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572355 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572378 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572385 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572402 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572408 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572633 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572657 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572674 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572686 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.574745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.577841 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.578029 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.580262 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.590673 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681954 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681975 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682395 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682483 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682726 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682755 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784264 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784387 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.785189 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.787061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789067 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789284 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789314 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.793746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.793747 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.806845 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.893638 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:40 crc kubenswrapper[4829]: I0217 16:21:40.295743 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" path="/var/lib/kubelet/pods/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2/volumes" Feb 17 16:21:40 crc kubenswrapper[4829]: I0217 16:21:40.430998 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:41 crc kubenswrapper[4829]: I0217 16:21:41.221198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} Feb 17 16:21:41 crc kubenswrapper[4829]: I0217 16:21:41.222190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"d8d11c7e5bc799f3b0a7fe14e7081721edd114e2dc2bdd16476077b9f7c7412d"} Feb 17 16:21:42 crc kubenswrapper[4829]: I0217 16:21:42.261879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} Feb 17 16:21:43 crc kubenswrapper[4829]: I0217 16:21:43.278472 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.304384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 
16:21:45.304768 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.331122 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.73760782 podStartE2EDuration="6.331100676s" podCreationTimestamp="2026-02-17 16:21:39 +0000 UTC" firstStartedPulling="2026-02-17 16:21:40.432397758 +0000 UTC m=+1612.849415736" lastFinishedPulling="2026-02-17 16:21:44.025890614 +0000 UTC m=+1616.442908592" observedRunningTime="2026-02-17 16:21:45.326173363 +0000 UTC m=+1617.743191341" watchObservedRunningTime="2026-02-17 16:21:45.331100676 +0000 UTC m=+1617.748118654" Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.606152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.256620 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.256828 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.349756 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7f836c5e6c4dc8ae142ea06fb1094515b55e687113f4883084160fc00bddb596/diff" to get inode 
usage: stat /var/lib/containers/storage/overlay/7f836c5e6c4dc8ae142ea06fb1094515b55e687113f4883084160fc00bddb596/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_aodh-0_0aced48a-e424-4579-a0f3-681531606707/aodh-api/0.log" to get inode usage: stat /var/log/pods/openstack_aodh-0_0aced48a-e424-4579-a0f3-681531606707/aodh-api/0.log: no such file or directory Feb 17 16:21:51 crc kubenswrapper[4829]: I0217 16:21:51.280393 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:51 crc kubenswrapper[4829]: E0217 16:21:51.281539 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:04 crc kubenswrapper[4829]: I0217 16:22:04.279669 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:04 crc kubenswrapper[4829]: E0217 16:22:04.280790 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:09 crc kubenswrapper[4829]: I0217 16:22:09.908215 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:22:17 crc kubenswrapper[4829]: I0217 16:22:17.280995 4829 scope.go:117] 
"RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:17 crc kubenswrapper[4829]: E0217 16:22:17.283797 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:21 crc kubenswrapper[4829]: I0217 16:22:21.977500 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:22:21 crc kubenswrapper[4829]: I0217 16:22:21.999409 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.013637 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.025378 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.044157 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.046432 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.054319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125695 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125831 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228370 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.234706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.235412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.260304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.293982 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" path="/var/lib/kubelet/pods/79d3ed60-8c68-44ec-aaa1-806b5aec5df1/volumes" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.295061 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" path="/var/lib/kubelet/pods/c89e689f-68fd-4357-a2a0-1d4b8d130702/volumes" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.371188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.981638 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111791 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111854 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111997 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.113333 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:23 crc kubenswrapper[4829]: I0217 16:22:23.796811 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:23 crc kubenswrapper[4829]: I0217 16:22:23.871339 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qptzd" event={"ID":"a7091b35-889b-422b-aead-117292847a8a","Type":"ContainerStarted","Data":"b2493eae309be4cd73f62f5acf506639f826fdfee8d1c7942d3e2c20faed1b14"} Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.873396 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219381 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219748 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" containerID="cri-o://77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219894 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" containerID="cri-o://508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219944 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core" containerID="cri-o://0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219984 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" containerID="cri-o://99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883118 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" exitCode=0 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883402 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" exitCode=2 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883411 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" exitCode=0 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883526 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} Feb 17 16:22:24 crc kubenswrapper[4829]: E0217 16:22:24.885158 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.899371 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:28 crc kubenswrapper[4829]: I0217 16:22:28.298673 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:28 crc kubenswrapper[4829]: E0217 16:22:28.299546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:28 crc kubenswrapper[4829]: I0217 16:22:28.386261 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" containerID="cri-o://6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00" gracePeriod=604796 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.724704 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" 
containerName="rabbitmq" containerID="cri-o://1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" gracePeriod=604796 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.827725 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937512 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937732 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937869 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937917 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937955 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod 
\"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938121 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938210 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938211 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.939652 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.939685 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.943700 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw" (OuterVolumeSpecName: "kube-api-access-dqrvw") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "kube-api-access-dqrvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.944697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts" (OuterVolumeSpecName: "scripts") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.945939 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" exitCode=0 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.945987 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946022 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"d8d11c7e5bc799f3b0a7fe14e7081721edd114e2dc2bdd16476077b9f7c7412d"} Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946047 4829 scope.go:117] "RemoveContainer" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946058 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.977289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.026232 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.041219 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043676 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043709 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043756 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043770 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") 
on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043778 4829 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.056743 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data" (OuterVolumeSpecName: "config-data") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.089684 4829 scope.go:117] "RemoveContainer" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.117185 4829 scope.go:117] "RemoveContainer" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.144644 4829 scope.go:117] "RemoveContainer" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.147485 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.173738 4829 scope.go:117] "RemoveContainer" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.174283 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": container with ID starting 
with 508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab not found: ID does not exist" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.174327 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} err="failed to get container status \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": rpc error: code = NotFound desc = could not find container \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": container with ID starting with 508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.174357 4829 scope.go:117] "RemoveContainer" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.174886 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": container with ID starting with 0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936 not found: ID does not exist" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.175363 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} err="failed to get container status \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": rpc error: code = NotFound desc = could not find container \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": container with ID starting with 0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936 not found: ID does 
not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.175797 4829 scope.go:117] "RemoveContainer" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.176708 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": container with ID starting with 99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c not found: ID does not exist" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.176744 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} err="failed to get container status \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": rpc error: code = NotFound desc = could not find container \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": container with ID starting with 99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.176767 4829 scope.go:117] "RemoveContainer" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.177115 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": container with ID starting with 77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181 not found: ID does not exist" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.177131 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} err="failed to get container status \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": rpc error: code = NotFound desc = could not find container \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": container with ID starting with 77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181 not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.328307 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.348412 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.367757 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368419 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368440 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368458 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368467 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368486 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" Feb 17 16:22:30 crc 
kubenswrapper[4829]: I0217 16:22:30.368496 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent"
Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368526 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368534 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368815 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368849 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368873 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368890 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.371521 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.374813 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.375036 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.375211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.402605 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453568 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453625 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453650 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453690 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453778 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453796 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556317 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556440 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557085 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557138 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557250 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557448 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557906 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.561057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.561510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.562519 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.564297 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.575373 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.575742 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0"
Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.720544 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:22:31 crc kubenswrapper[4829]: W0217 16:22:31.304942 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01f505e_09de_4b7d_ae8a_b9f392c3b592.slice/crio-ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7 WatchSource:0}: Error finding container ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7: Status 404 returned error can't find the container with id ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7
Feb 17 16:22:31 crc kubenswrapper[4829]: I0217 16:22:31.309862 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.424960 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.425022 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.425159 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:22:31 crc kubenswrapper[4829]: I0217 16:22:31.974695 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7"}
Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.312886 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" path="/var/lib/kubelet/pods/4fe2d3ad-54aa-4d5c-b875-2683ed774353/volumes"
Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.997907 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"cbe778ccec508c84598a4abeef47ed9a0768c53d6ccce4ed245fb45058a970d7"}
Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.998202 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"786e109818c6005753b8c470c8e72a7b694be9c3948e59c5789ef8477a177bc4"}
Feb 17 16:22:34 crc kubenswrapper[4829]: E0217 16:22:34.453666 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.029081 4829 generic.go:334] "Generic (PLEG): container finished" podID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerID="6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00" exitCode=0
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.029187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00"}
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.033140 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"6363ba2128e84ecbd3d2bf246f5413ef29b9ca0801b406d6dbbf538246845237"}
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.033338 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:22:35 crc kubenswrapper[4829]: E0217 16:22:35.034855 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.114792 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.182597 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.182645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.183298 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186674 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186743 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186798 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186840 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186975 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187020 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187041 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187063 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187599 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.188393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.189366 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf" (OuterVolumeSpecName: "kube-api-access-n8ndf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "kube-api-access-n8ndf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.189832 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.206928 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.207067 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.207758 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info" (OuterVolumeSpecName: "pod-info") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.230216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data" (OuterVolumeSpecName: "config-data") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.249274 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33" (OuterVolumeSpecName: "persistence") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294358 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294407 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") on node \"crc\" "
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294420 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294430 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294439 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294450 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294460 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294470 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.295693 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf" (OuterVolumeSpecName: "server-conf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.319852 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.358954 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.359100 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33") on node "crc"
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.391393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397088 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397127 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397140 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.058352 4829 generic.go:334] "Generic (PLEG): container finished" podID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerID="1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" exitCode=0
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.058454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d"}
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062758 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062803 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e"}
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062887 4829 scope.go:117] "RemoveContainer" containerID="6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00"
Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.066152 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.091538 4829 scope.go:117] "RemoveContainer" containerID="b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.128630 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.140543 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.166077 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.168026 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168095 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq"
Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.168106 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="setup-container"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168113 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="setup-container"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168709 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.170646 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.205106 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.322524 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.334081 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" path="/var/lib/kubelet/pods/257c3943-bfcb-409b-a915-bacfd95d9c93/volumes"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.345942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346195 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350721 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350759 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350984 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.351021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2"
Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395662 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395720 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395830 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.398945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453529 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453825 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453849 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 
16:22:36.453874 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453905 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.454004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.454106 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.455617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.455646 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.457654 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.457909 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.460228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.463168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.463607 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.465064 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.465859 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.466562 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.466661 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cec88d4327ff12753cbf1d7636d4616ad5b51e6f71f7c68ee07d08bc8a1cc1e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.479716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.538320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.567739 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761021 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761544 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762523 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762901 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763648 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763678 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763741 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763776 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763848 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod 
\"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763967 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.764822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info" (OuterVolumeSpecName: "pod-info") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765067 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765083 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765164 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.766752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.767190 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.772868 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk" (OuterVolumeSpecName: "kube-api-access-d5wnk") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "kube-api-access-d5wnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.774759 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.794289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4" (OuterVolumeSpecName: "persistence") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.812489 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data" (OuterVolumeSpecName: "config-data") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.830215 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.844158 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf" (OuterVolumeSpecName: "server-conf") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870087 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870119 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870180 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870288 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") on node \"crc\" " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870307 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870319 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870358 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath 
\"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870369 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.894899 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.946970 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.947330 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4") on node "crc" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.972476 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.972510 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb"} Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074690 4829 scope.go:117] "RemoveContainer" containerID="1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074629 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.111686 4829 scope.go:117] "RemoveContainer" containerID="6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.135723 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.143427 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.160620 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: E0217 16:22:37.161182 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.161201 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: E0217 16:22:37.161221 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="setup-container" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.161229 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="setup-container" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 
16:22:37.161465 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.173101 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.179878 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.179924 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180116 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180268 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9x5xf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180368 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181121 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181132 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.278768 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279181 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279352 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279400 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279517 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.308751 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:37 crc kubenswrapper[4829]: W0217 16:22:37.308824 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13860a28_5cd6_4bf9_b60b_3872c76444a8.slice/crio-6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6 WatchSource:0}: Error finding container 6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6: Status 404 returned error can't find the container with id 6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6 Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382275 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382291 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382385 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 
crc kubenswrapper[4829]: I0217 16:22:37.382444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382473 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383051 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383065 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383639 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.384433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.385710 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.385736 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c712c179c4211caeb2d08f251b409f456d9a156c71e8c917f92effa050520833/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.387122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.387926 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.388336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.389304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.406154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.437750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.576901 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.091826 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6"} Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.143745 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:38 crc kubenswrapper[4829]: W0217 16:22:38.146142 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6b5337_789c_48a9_b772_3d96b64640e6.slice/crio-2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3 WatchSource:0}: Error finding container 2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3: Status 404 returned error can't find the container with id 
2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3 Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.312546 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" path="/var/lib/kubelet/pods/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d/volumes" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.507645 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.512544 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.520125 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.548648 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.611850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.611927 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612039 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612067 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612173 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.713633 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.713945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714058 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714093 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714118 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: 
\"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714539 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715075 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715143 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod 
\"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715821 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.716208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.746442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.842701 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.103282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3"} Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.106221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28"} Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.336146 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:39 crc kubenswrapper[4829]: W0217 16:22:39.339562 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9656ce3d_4ce5_4e96_8d26_ceb6f4e27a99.slice/crio-6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77 WatchSource:0}: Error finding container 6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77: Status 404 returned error can't find the container with id 6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77 Feb 17 16:22:40 crc kubenswrapper[4829]: I0217 16:22:40.123286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} Feb 17 16:22:40 crc kubenswrapper[4829]: I0217 16:22:40.123641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" 
event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.138281 4829 generic.go:334] "Generic (PLEG): container finished" podID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" exitCode=0 Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.138385 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.140537 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.281038 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:41 crc kubenswrapper[4829]: E0217 16:22:41.281441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:42 crc kubenswrapper[4829]: I0217 16:22:42.166232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" 
event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} Feb 17 16:22:42 crc kubenswrapper[4829]: I0217 16:22:42.205447 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" podStartSLOduration=4.205417991 podStartE2EDuration="4.205417991s" podCreationTimestamp="2026-02-17 16:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:42.18980869 +0000 UTC m=+1674.606826718" watchObservedRunningTime="2026-02-17 16:22:42.205417991 +0000 UTC m=+1674.622435979" Feb 17 16:22:43 crc kubenswrapper[4829]: I0217 16:22:43.181722 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:47 crc kubenswrapper[4829]: I0217 16:22:47.296829 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422386 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422486 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422745 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.424154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:48 crc kubenswrapper[4829]: E0217 16:22:48.260074 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.844800 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.935906 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.936409 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" containerID="cri-o://5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" gracePeriod=10 Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.146153 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.148265 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.162418 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.292526 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerID="5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" exitCode=0 Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.292818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e"} Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: 
\"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304324 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod 
\"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406322 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: 
\"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406547 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.407453 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.407965 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408551 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 
16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.409617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.430985 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.473479 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.708093 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814533 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814622 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814673 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814781 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.828327 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4" (OuterVolumeSpecName: "kube-api-access-jvqs4") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "kube-api-access-jvqs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.894413 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.917364 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.917396 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.927332 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config" (OuterVolumeSpecName: "config") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.933831 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.940478 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.953144 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019922 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019961 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019978 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019991 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.160813 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.318104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerStarted","Data":"658c262b31dab8fa64bda70171117a69b0cd30958700a8f068147d23f7aff478"} Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.321678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee"} Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.321760 4829 scope.go:117] 
"RemoveContainer" containerID="5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.322001 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.380255 4829 scope.go:117] "RemoveContainer" containerID="d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.415818 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.428302 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:51 crc kubenswrapper[4829]: E0217 16:22:51.280895 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:51 crc kubenswrapper[4829]: I0217 16:22:51.338103 4829 generic.go:334] "Generic (PLEG): container finished" podID="de1b2a48-73a6-48b7-94d8-1c24530f4d2b" containerID="4e9bde6d42e9871da8ffb869aabe5aeb3dbe328d0f62ce7ae655427b1a6286b9" exitCode=0 Feb 17 16:22:51 crc kubenswrapper[4829]: I0217 16:22:51.338207 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerDied","Data":"4e9bde6d42e9871da8ffb869aabe5aeb3dbe328d0f62ce7ae655427b1a6286b9"} Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.281215 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:52 crc kubenswrapper[4829]: E0217 16:22:52.282469 
4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.307825 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" path="/var/lib/kubelet/pods/3fdb8e01-6d92-47be-a6a8-4d2e39d42152/volumes" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.365649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerStarted","Data":"63ebe057e4e9114ce7c31db34d9d9fec65c3a33829164d5a3068de5b975ecd60"} Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.365827 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.400383 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" podStartSLOduration=3.40035772 podStartE2EDuration="3.40035772s" podCreationTimestamp="2026-02-17 16:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:52.393192377 +0000 UTC m=+1684.810210395" watchObservedRunningTime="2026-02-17 16:22:52.40035772 +0000 UTC m=+1684.817375738" Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.476891 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.566871 4829 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.568348 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" containerID="cri-o://f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" gracePeriod=10 Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.236451 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.336911 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.336958 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337062 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337080 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod 
\"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337153 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337220 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337327 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.347828 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz" (OuterVolumeSpecName: "kube-api-access-lt6vz") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "kube-api-access-lt6vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.396686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config" (OuterVolumeSpecName: "config") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.405878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.415296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.420876 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.431475 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.432125 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441363 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441424 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441440 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441452 4829 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441464 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441492 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441504 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494339 4829 generic.go:334] "Generic (PLEG): container finished" podID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494388 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.495620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77"} Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.495638 4829 scope.go:117] "RemoveContainer" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.533095 4829 scope.go:117] "RemoveContainer" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.541076 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:23:00 crc 
kubenswrapper[4829]: I0217 16:23:00.554423 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.555659 4829 scope.go:117] "RemoveContainer" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: E0217 16:23:00.556152 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": container with ID starting with f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4 not found: ID does not exist" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556199 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} err="failed to get container status \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": rpc error: code = NotFound desc = could not find container \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": container with ID starting with f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4 not found: ID does not exist" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556296 4829 scope.go:117] "RemoveContainer" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: E0217 16:23:00.556818 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": container with ID starting with 2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9 not found: ID does not exist" 
containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556862 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} err="failed to get container status \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": rpc error: code = NotFound desc = could not find container \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": container with ID starting with 2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9 not found: ID does not exist" Feb 17 16:23:02 crc kubenswrapper[4829]: E0217 16:23:02.283258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:02 crc kubenswrapper[4829]: I0217 16:23:02.300920 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" path="/var/lib/kubelet/pods/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99/volumes" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422066 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422601 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422853 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.424472 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:06 crc kubenswrapper[4829]: I0217 16:23:06.280092 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:06 crc kubenswrapper[4829]: E0217 16:23:06.281113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:11 crc kubenswrapper[4829]: I0217 16:23:11.638924 4829 generic.go:334] "Generic (PLEG): container finished" podID="13860a28-5cd6-4bf9-b60b-3872c76444a8" containerID="d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28" exitCode=0 Feb 17 16:23:11 crc kubenswrapper[4829]: I0217 16:23:11.639154 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerDied","Data":"d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.658375 4829 generic.go:334] "Generic (PLEG): container finished" podID="4c6b5337-789c-48a9-b772-3d96b64640e6" containerID="2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855" exitCode=0 Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.658545 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerDied","Data":"2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.661644 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"17958486db1f8626286073b7193b9fc9f2a71fed07c7a02278e530d40fb15d7e"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.661901 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.735315 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=36.73530077 podStartE2EDuration="36.73530077s" podCreationTimestamp="2026-02-17 16:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:12.730493399 +0000 UTC m=+1705.147511387" watchObservedRunningTime="2026-02-17 16:23:12.73530077 +0000 UTC m=+1705.152318738" Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.681154 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"0bd32012d7a00b558d50fae45ce486aee73bc59eb9fb23789c1ad852bd5e7305"} Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.681616 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.716945 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.716925464 podStartE2EDuration="36.716925464s" podCreationTimestamp="2026-02-17 16:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:13.7079469 +0000 UTC m=+1706.124964908" watchObservedRunningTime="2026-02-17 16:23:13.716925464 +0000 UTC m=+1706.133943442" Feb 17 
16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.755907 4829 scope.go:117] "RemoveContainer" containerID="60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.781985 4829 scope.go:117] "RemoveContainer" containerID="49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.847161 4829 scope.go:117] "RemoveContainer" containerID="d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.909314 4829 scope.go:117] "RemoveContainer" containerID="4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.971019 4829 scope.go:117] "RemoveContainer" containerID="8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.281645 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.366891 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.366949 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.367104 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.368234 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596193 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596673 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596689 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596714 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596722 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596737 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596743 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" 
Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596755 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596760 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596970 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596996 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.597806 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602472 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602728 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602951 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.622286 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.673903 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674549 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674665 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: 
\"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777454 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777506 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.784391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.784613 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.785431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.795878 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.920683 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:18 crc kubenswrapper[4829]: E0217 16:23:18.294819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:18 crc kubenswrapper[4829]: I0217 16:23:18.611291 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:18 crc kubenswrapper[4829]: I0217 16:23:18.738734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerStarted","Data":"a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805"} Feb 17 16:23:19 crc kubenswrapper[4829]: I0217 16:23:19.280546 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:19 crc kubenswrapper[4829]: E0217 16:23:19.281117 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:20 crc kubenswrapper[4829]: I0217 16:23:20.358255 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3fdb8e01-6d92-47be-a6a8-4d2e39d42152"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort 
pod3fdb8e01-6d92-47be-a6a8-4d2e39d42152] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3fdb8e01_6d92_47be_a6a8_4d2e39d42152.slice" Feb 17 16:23:26 crc kubenswrapper[4829]: I0217 16:23:26.833756 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:23:26 crc kubenswrapper[4829]: I0217 16:23:26.938615 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:27 crc kubenswrapper[4829]: I0217 16:23:27.579831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:29 crc kubenswrapper[4829]: E0217 16:23:29.319745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:29 crc kubenswrapper[4829]: I0217 16:23:29.909369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerStarted","Data":"f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c"} Feb 17 16:23:29 crc kubenswrapper[4829]: I0217 16:23:29.936316 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" podStartSLOduration=2.179034714 podStartE2EDuration="12.936296555s" podCreationTimestamp="2026-02-17 16:23:17 +0000 UTC" firstStartedPulling="2026-02-17 16:23:18.587854274 +0000 UTC m=+1711.004872242" lastFinishedPulling="2026-02-17 16:23:29.345112335 +0000 UTC m=+1721.762134083" observedRunningTime="2026-02-17 16:23:29.933290203 +0000 UTC m=+1722.350308181" watchObservedRunningTime="2026-02-17 
16:23:29.936296555 +0000 UTC m=+1722.353314553" Feb 17 16:23:32 crc kubenswrapper[4829]: I0217 16:23:32.182356 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" containerID="cri-o://7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" gracePeriod=604795 Feb 17 16:23:32 crc kubenswrapper[4829]: E0217 16:23:32.282555 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:34 crc kubenswrapper[4829]: I0217 16:23:34.280624 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:34 crc kubenswrapper[4829]: E0217 16:23:34.281079 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:35 crc kubenswrapper[4829]: I0217 16:23:35.227202 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.860476 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.948645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949007 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949031 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949146 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949204 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949364 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949387 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949551 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.951150 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.951924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.952946 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.955758 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.956271 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.956659 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info" (OuterVolumeSpecName: "pod-info") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.968002 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.973047 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.973216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2" (OuterVolumeSpecName: "kube-api-access-vm5t2") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "kube-api-access-vm5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.993872 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data" (OuterVolumeSpecName: "config-data") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.019477 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f" (OuterVolumeSpecName: "persistence") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "pvc-84d96401-ecc6-4b20-91e2-fae52f90027f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020476 4829 generic.go:334] "Generic (PLEG): container finished" podID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" exitCode=0 Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020522 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020548 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928"} Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020550 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020565 4829 scope.go:117] "RemoveContainer" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.050670 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf" (OuterVolumeSpecName: "server-conf") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054929 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054961 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054994 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") on node \"crc\" " Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055006 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055014 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055023 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055033 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055041 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055049 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.101184 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.101326 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-84d96401-ecc6-4b20-91e2-fae52f90027f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f") on node "crc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.120280 4829 scope.go:117] "RemoveContainer" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.141046 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142178 4829 scope.go:117] "RemoveContainer" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.142697 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": container with ID starting with 7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc not found: ID does not exist" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142730 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} err="failed to get container status \"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": rpc error: code = NotFound desc = could not find container 
\"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": container with ID starting with 7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc not found: ID does not exist" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142750 4829 scope.go:117] "RemoveContainer" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.143050 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": container with ID starting with 42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847 not found: ID does not exist" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.143092 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"} err="failed to get container status \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": rpc error: code = NotFound desc = could not find container \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": container with ID starting with 42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847 not found: ID does not exist" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.157427 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.157461 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.359991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.417269 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.429402 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.430041 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="setup-container" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430073 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="setup-container" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.430083 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430092 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430401 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.432119 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.453748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599031 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599427 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599629 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599660 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599709 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599744 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599762 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702246 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702400 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: 
\"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702418 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702522 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702631 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.703512 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.705161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.705939 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.706505 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.706530 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b279f517412c9d421e4d384ad7a1032e9021db2370e77c854a0ec0125cf75d39/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.707692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.708777 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.709175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: 
\"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.709693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.710738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.726846 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.759593 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.872753 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:40 crc kubenswrapper[4829]: I0217 16:23:40.301634 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" path="/var/lib/kubelet/pods/328bcfe0-93b6-44bb-83ca-2b3a105f1548/volumes" Feb 17 16:23:40 crc kubenswrapper[4829]: I0217 16:23:40.389364 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.064232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"b2d0281e8cc1c30da8422e8269380efafdb42c42ab81ddf3b4f0cc192a279788"} Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.067434 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerID="f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c" exitCode=0 Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.067497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerDied","Data":"f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c"} Feb 17 16:23:42 crc kubenswrapper[4829]: I0217 16:23:42.848738 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.002731 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.002845 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.003051 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.003202 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.008421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd" (OuterVolumeSpecName: "kube-api-access-7p2rd") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "kube-api-access-7p2rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.011909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.042559 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.047864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory" (OuterVolumeSpecName: "inventory") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.090446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d"} Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerDied","Data":"a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805"} Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094646 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094705 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112616 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112690 4829 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112708 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112720 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.193864 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:43 crc kubenswrapper[4829]: E0217 16:23:43.196139 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.196187 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.196500 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.197385 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.210919 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249639 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249807 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249952 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.250166 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.316940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.317070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.317105 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419662 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 
16:23:43.423981 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.429928 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.450544 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.562776 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:44 crc kubenswrapper[4829]: I0217 16:23:44.185801 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400489 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400548 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400707 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.401909 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.118748 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerStarted","Data":"1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21"} Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.119094 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerStarted","Data":"de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40"} Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.145373 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" podStartSLOduration=1.748254218 podStartE2EDuration="2.145351473s" podCreationTimestamp="2026-02-17 16:23:43 +0000 UTC" firstStartedPulling="2026-02-17 16:23:44.191146064 +0000 UTC m=+1736.608164062" lastFinishedPulling="2026-02-17 16:23:44.588243329 +0000 UTC m=+1737.005261317" observedRunningTime="2026-02-17 16:23:45.13828782 +0000 UTC m=+1737.555305798" watchObservedRunningTime="2026-02-17 16:23:45.145351473 +0000 UTC m=+1737.562369451" Feb 17 16:23:45 crc kubenswrapper[4829]: E0217 16:23:45.281087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:48 crc kubenswrapper[4829]: I0217 16:23:48.171450 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerID="1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21" exitCode=0 Feb 17 16:23:48 crc kubenswrapper[4829]: I0217 16:23:48.171502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerDied","Data":"1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21"} Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.280055 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:49 crc kubenswrapper[4829]: E0217 16:23:49.281043 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.822737 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.991837 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.991961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.992276 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.000063 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr" (OuterVolumeSpecName: "kube-api-access-kc2sr") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). InnerVolumeSpecName "kube-api-access-kc2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.026785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory" (OuterVolumeSpecName: "inventory") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.030214 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095259 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095293 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095303 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.201369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerDied","Data":"de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40"} Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.201409 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 
16:23:50.201468 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.319007 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:50 crc kubenswrapper[4829]: E0217 16:23:50.319819 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.319842 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.320280 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.321673 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.324446 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.324938 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.325297 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.326290 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.333917 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504666 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504757 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.607878 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.607975 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.608020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.608367 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.611686 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.612232 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.614244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.640886 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.652090 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:51 crc kubenswrapper[4829]: I0217 16:23:51.268397 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:51 crc kubenswrapper[4829]: W0217 16:23:51.280823 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f00333b_9c18_4a8c_b409_2961da9afccc.slice/crio-78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a WatchSource:0}: Error finding container 78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a: Status 404 returned error can't find the container with id 78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a Feb 17 16:23:52 crc kubenswrapper[4829]: I0217 16:23:52.232455 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerStarted","Data":"78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a"} Feb 17 16:23:54 crc kubenswrapper[4829]: I0217 16:23:54.259156 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" 
event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerStarted","Data":"dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802"} Feb 17 16:23:54 crc kubenswrapper[4829]: I0217 16:23:54.286420 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" podStartSLOduration=2.562824317 podStartE2EDuration="4.286397819s" podCreationTimestamp="2026-02-17 16:23:50 +0000 UTC" firstStartedPulling="2026-02-17 16:23:51.282990546 +0000 UTC m=+1743.700008524" lastFinishedPulling="2026-02-17 16:23:53.006564038 +0000 UTC m=+1745.423582026" observedRunningTime="2026-02-17 16:23:54.279971595 +0000 UTC m=+1746.696989573" watchObservedRunningTime="2026-02-17 16:23:54.286397819 +0000 UTC m=+1746.703415797" Feb 17 16:23:57 crc kubenswrapper[4829]: E0217 16:23:57.282399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:00 crc kubenswrapper[4829]: I0217 16:24:00.280753 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.281399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415383 4829 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415465 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415662 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.417268 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:10 crc kubenswrapper[4829]: E0217 16:24:10.281215 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:11 crc kubenswrapper[4829]: E0217 16:24:11.282060 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:13 crc kubenswrapper[4829]: I0217 16:24:13.280280 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:13 crc kubenswrapper[4829]: E0217 16:24:13.281171 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.215931 4829 scope.go:117] "RemoveContainer" containerID="7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.239567 4829 scope.go:117] "RemoveContainer" containerID="5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 
16:24:15.311306 4829 scope.go:117] "RemoveContainer" containerID="ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.336863 4829 scope.go:117] "RemoveContainer" containerID="add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.509130 4829 generic.go:334] "Generic (PLEG): container finished" podID="342647d1-5339-47e5-b35c-80b4406a2ea6" containerID="36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d" exitCode=0 Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.509182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerDied","Data":"36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d"} Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.522659 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"abfe536e361127215a0200d70dc18ee7b043da3413cd9902d21e30e5460979b4"} Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.523367 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.558564 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.558546437 podStartE2EDuration="37.558546437s" podCreationTimestamp="2026-02-17 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:16.550282352 +0000 UTC m=+1768.967300330" watchObservedRunningTime="2026-02-17 16:24:16.558546437 +0000 UTC m=+1768.975564415" Feb 17 16:24:24 crc kubenswrapper[4829]: I0217 16:24:24.278955 4829 scope.go:117] "RemoveContainer" 
containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:24 crc kubenswrapper[4829]: E0217 16:24:24.279664 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:25 crc kubenswrapper[4829]: E0217 16:24:25.281829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:26 crc kubenswrapper[4829]: E0217 16:24:26.282610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:29 crc kubenswrapper[4829]: I0217 16:24:29.875863 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:24:29 crc kubenswrapper[4829]: I0217 16:24:29.939499 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:34 crc kubenswrapper[4829]: I0217 16:24:34.360941 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" 
containerID="cri-o://ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" gracePeriod=604796 Feb 17 16:24:35 crc kubenswrapper[4829]: I0217 16:24:35.203212 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 17 16:24:36 crc kubenswrapper[4829]: I0217 16:24:36.280917 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:36 crc kubenswrapper[4829]: E0217 16:24:36.281840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:36 crc kubenswrapper[4829]: E0217 16:24:36.282852 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:41 crc kubenswrapper[4829]: E0217 16:24:41.299821 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.578468 4829 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764681 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.768928 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.768983 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769027 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769141 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769259 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769317 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769342 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " 
Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770128 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770526 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771305 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771327 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771340 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.772030 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8" (OuterVolumeSpecName: "kube-api-access-lz7m8") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "kube-api-access-lz7m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.772180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.781346 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info" (OuterVolumeSpecName: "pod-info") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.785414 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.798354 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9" (OuterVolumeSpecName: "persistence") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.822942 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data" (OuterVolumeSpecName: "config-data") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.857026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf" (OuterVolumeSpecName: "server-conf") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865434 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" exitCode=0 Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865509 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc"} Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865528 4829 scope.go:117] "RemoveContainer" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.867459 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874637 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874691 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874701 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874710 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874725 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874738 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874780 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") on node \"crc\" " Feb 17 16:24:41 
crc kubenswrapper[4829]: I0217 16:24:41.920883 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.921210 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9") on node "crc" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.975772 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.976432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.977657 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: W0217 16:24:41.977800 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ee690a85-cf83-4e55-a69d-ca6bd136bf07/volumes/kubernetes.io~projected/rabbitmq-confd Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.977880 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.079356 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.085735 4829 scope.go:117] "RemoveContainer" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.116233 4829 scope.go:117] "RemoveContainer" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.117236 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": container with ID starting with ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319 not found: ID does not exist" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117269 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} err="failed to get container status \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": rpc error: code = NotFound desc = could not find container \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": container with ID starting with ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319 not found: ID does not exist" Feb 17 
16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117293 4829 scope.go:117] "RemoveContainer" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.117628 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": container with ID starting with 86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a not found: ID does not exist" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117769 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"} err="failed to get container status \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": rpc error: code = NotFound desc = could not find container \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": container with ID starting with 86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a not found: ID does not exist" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.224591 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.239687 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.326976 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" path="/var/lib/kubelet/pods/ee690a85-cf83-4e55-a69d-ca6bd136bf07/volumes" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.327638 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.328003 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="setup-container" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328014 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="setup-container" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.328046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328052 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328252 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.329623 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.329699 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.501870 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.501977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 
crc kubenswrapper[4829]: I0217 16:24:42.502338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502849 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc 
kubenswrapper[4829]: I0217 16:24:42.502883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605352 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605732 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605769 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605840 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606003 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606308 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.607039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.607177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc 
kubenswrapper[4829]: I0217 16:24:42.607591 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.609283 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.609335 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2fb41440360b87637c863c905d7642fdbb5fac4b43922d0db49761300e3e982/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611055 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611167 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611726 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.613096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.636048 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.713406 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.955891 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:43 crc kubenswrapper[4829]: I0217 16:24:43.521203 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:43 crc kubenswrapper[4829]: I0217 16:24:43.891171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"8cbb4822f62f78253042dcb81e07985af5147d86b60f491f906f8010915fbb28"} Feb 17 16:24:46 crc kubenswrapper[4829]: I0217 16:24:46.938249 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0"} Feb 17 16:24:47 crc kubenswrapper[4829]: I0217 16:24:47.279732 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:47 crc kubenswrapper[4829]: E0217 16:24:47.280388 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:50 crc kubenswrapper[4829]: E0217 16:24:50.281982 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:56 crc kubenswrapper[4829]: E0217 16:24:56.282644 4829 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:02 crc kubenswrapper[4829]: I0217 16:25:02.280391 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:25:02 crc kubenswrapper[4829]: E0217 16:25:02.281471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:25:03 crc kubenswrapper[4829]: E0217 16:25:03.282373 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:25:10 crc kubenswrapper[4829]: E0217 16:25:10.284927 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:17 crc kubenswrapper[4829]: I0217 16:25:17.280166 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:25:17 crc kubenswrapper[4829]: 
E0217 16:25:17.283264 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425130 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425406 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425512 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.426670 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:25:19 crc kubenswrapper[4829]: I0217 16:25:19.430036 4829 generic.go:334] "Generic (PLEG): container finished" podID="feaa3649-f3db-44ac-8054-cd13296c0845" containerID="e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0" exitCode=0 Feb 17 16:25:19 crc kubenswrapper[4829]: I0217 16:25:19.430193 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerDied","Data":"e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0"} Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.445471 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"3ad375d29c751ca67e9ead9056f161b8c22463b18f6e4a157e0f7a0a8768addb"} Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.446082 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.478516 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.478494203 podStartE2EDuration="38.478494203s" podCreationTimestamp="2026-02-17 16:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:20.471992836 +0000 UTC m=+1832.889010854" watchObservedRunningTime="2026-02-17 16:25:20.478494203 +0000 UTC m=+1832.895512181" Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.408525 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.409081 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.409286 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.410554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.506530 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.511349 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.520676 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.598753 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.599079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.599187 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701398 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.702251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.702288 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.724409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.849478 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:30 crc kubenswrapper[4829]: E0217 16:25:30.280983 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:25:30 crc kubenswrapper[4829]: I0217 16:25:30.353914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:30 crc kubenswrapper[4829]: I0217 16:25:30.570240 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"787ba6ce4d84d1c2d3fed84fb2ed9b68fbb7b8f0c893e7970515e42d02dec566"} Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.306281 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.585643 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" exitCode=0 Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.585682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" 
event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27"} Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.597548 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"} Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.600363 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"} Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.960163 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:25:34 crc kubenswrapper[4829]: I0217 16:25:34.626351 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" exitCode=0 Feb 17 16:25:34 crc kubenswrapper[4829]: I0217 16:25:34.626491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"} Feb 17 16:25:35 crc kubenswrapper[4829]: I0217 16:25:35.643972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"} Feb 17 16:25:35 crc kubenswrapper[4829]: I0217 16:25:35.672058 4829 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-pvqbf" podStartSLOduration=3.1027734750000002 podStartE2EDuration="6.672036977s" podCreationTimestamp="2026-02-17 16:25:29 +0000 UTC" firstStartedPulling="2026-02-17 16:25:31.588012975 +0000 UTC m=+1844.005030953" lastFinishedPulling="2026-02-17 16:25:35.157276477 +0000 UTC m=+1847.574294455" observedRunningTime="2026-02-17 16:25:35.661518681 +0000 UTC m=+1848.078536679" watchObservedRunningTime="2026-02-17 16:25:35.672036977 +0000 UTC m=+1848.089054965" Feb 17 16:25:38 crc kubenswrapper[4829]: E0217 16:25:38.296318 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.849755 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.850101 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.912554 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:40 crc kubenswrapper[4829]: I0217 16:25:40.790462 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:40 crc kubenswrapper[4829]: I0217 16:25:40.848522 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:42 crc kubenswrapper[4829]: I0217 16:25:42.741136 4829 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/certified-operators-pvqbf" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" containerID="cri-o://0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" gracePeriod=2 Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.280727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.354514 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.491854 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.492132 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.492164 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.493062 4829 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities" (OuterVolumeSpecName: "utilities") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.499217 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p" (OuterVolumeSpecName: "kube-api-access-dqk7p") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "kube-api-access-dqk7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.542357 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595172 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595198 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595208 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758209 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" exitCode=0 Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758278 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"} Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758319 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"787ba6ce4d84d1c2d3fed84fb2ed9b68fbb7b8f0c893e7970515e42d02dec566"} Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758348 4829 scope.go:117] "RemoveContainer" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 
16:25:43.760739 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.809438 4829 scope.go:117] "RemoveContainer" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.822636 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.837912 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.845528 4829 scope.go:117] "RemoveContainer" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.908161 4829 scope.go:117] "RemoveContainer" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.908996 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": container with ID starting with 0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4 not found: ID does not exist" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909040 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"} err="failed to get container status \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": rpc error: code = NotFound desc = could not find container \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": container with ID starting with 
0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4 not found: ID does not exist" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909088 4829 scope.go:117] "RemoveContainer" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.909462 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": container with ID starting with 9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307 not found: ID does not exist" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909508 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"} err="failed to get container status \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": rpc error: code = NotFound desc = could not find container \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": container with ID starting with 9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307 not found: ID does not exist" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909540 4829 scope.go:117] "RemoveContainer" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.909980 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": container with ID starting with c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27 not found: ID does not exist" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc 
kubenswrapper[4829]: I0217 16:25:43.910022 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27"} err="failed to get container status \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": rpc error: code = NotFound desc = could not find container \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": container with ID starting with c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27 not found: ID does not exist" Feb 17 16:25:44 crc kubenswrapper[4829]: I0217 16:25:44.295254 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" path="/var/lib/kubelet/pods/f33a93a0-671d-4454-a62b-9d8f6e0b9f73/volumes" Feb 17 16:25:52 crc kubenswrapper[4829]: E0217 16:25:52.281810 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:58 crc kubenswrapper[4829]: E0217 16:25:58.288770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:07 crc kubenswrapper[4829]: E0217 16:26:07.283156 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:11 crc kubenswrapper[4829]: E0217 16:26:11.282314 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:15 crc kubenswrapper[4829]: I0217 16:26:15.542879 4829 scope.go:117] "RemoveContainer" containerID="916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404" Feb 17 16:26:15 crc kubenswrapper[4829]: I0217 16:26:15.574692 4829 scope.go:117] "RemoveContainer" containerID="09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" Feb 17 16:26:20 crc kubenswrapper[4829]: E0217 16:26:20.282944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:25 crc kubenswrapper[4829]: E0217 16:26:25.283513 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.067347 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.078321 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 
16:26:33.087762 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.105750 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:26:33 crc kubenswrapper[4829]: E0217 16:26:33.282988 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:34 crc kubenswrapper[4829]: I0217 16:26:34.295335 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" path="/var/lib/kubelet/pods/91c18e73-013c-4a4d-a4cc-922f43fccf45/volumes" Feb 17 16:26:34 crc kubenswrapper[4829]: I0217 16:26:34.297031 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" path="/var/lib/kubelet/pods/aaa06d20-74dd-41b6-822b-485fdf6cc6d5/volumes" Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.034557 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.051260 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.064797 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.074667 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:26:36 crc kubenswrapper[4829]: I0217 16:26:36.293167 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" path="/var/lib/kubelet/pods/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d/volumes" Feb 17 16:26:36 crc kubenswrapper[4829]: I0217 16:26:36.294296 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" path="/var/lib/kubelet/pods/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef/volumes" Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.032789 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.045218 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.449419 4829 generic.go:334] "Generic (PLEG): container finished" podID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerID="dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802" exitCode=0 Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.449479 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerDied","Data":"dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802"} Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.033716 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.045698 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.294242 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" path="/var/lib/kubelet/pods/406819b6-b859-4d4d-93ee-43180f5981bf/volumes" Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.295434 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" path="/var/lib/kubelet/pods/ea266eaa-6bce-499f-9891-ca9ec670e465/volumes" Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.911321 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.088811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.088932 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.089032 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.089137 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.095004 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j" (OuterVolumeSpecName: "kube-api-access-8hf5j") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "kube-api-access-8hf5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.098300 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.121986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.127895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory" (OuterVolumeSpecName: "inventory") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.192957 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193282 4829 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193292 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193303 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.281384 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerDied","Data":"78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a"} Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478332 4829 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478395 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576160 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"] Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.576899 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576935 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.576959 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-utilities" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576969 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-utilities" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.577014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577023 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.577044 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" 
containerName="extract-content"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-content"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577372 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577395 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.578723 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582163 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582489 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.583050 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.598116 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"]
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603371 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603495 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706819 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.711057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.720718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.730090 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.902846 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"
Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.054660 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"]
Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.073104 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"]
Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.294028 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" path="/var/lib/kubelet/pods/e03006c3-35b5-45e5-9b9f-578a8eabbf22/volumes"
Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.491721 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"]
Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.046620 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"]
Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.062519 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"]
Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.505903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerStarted","Data":"e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf"}
Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.505953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerStarted","Data":"4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313"}
Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.537866 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" podStartSLOduration=2.119700583 podStartE2EDuration="2.537844065s" podCreationTimestamp="2026-02-17 16:26:39 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.500720819 +0000 UTC m=+1912.917738797" lastFinishedPulling="2026-02-17 16:26:40.918864301 +0000 UTC m=+1913.335882279" observedRunningTime="2026-02-17 16:26:41.525749507 +0000 UTC m=+1913.942767495" watchObservedRunningTime="2026-02-17 16:26:41.537844065 +0000 UTC m=+1913.954862053"
Feb 17 16:26:42 crc kubenswrapper[4829]: I0217 16:26:42.296917 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" path="/var/lib/kubelet/pods/e50b4954-d1c6-451e-b8f4-3ba817c89c6b/volumes"
Feb 17 16:26:44 crc kubenswrapper[4829]: E0217 16:26:44.286521 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:26:50 crc kubenswrapper[4829]: E0217 16:26:50.283205 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.037682 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"]
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.048151 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"]
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.058282 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"]
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.070369 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"]
Feb 17 16:26:58 crc kubenswrapper[4829]: E0217 16:26:58.289819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.302599 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c492d16-f301-449b-a877-a15a17739865" path="/var/lib/kubelet/pods/5c492d16-f301-449b-a877-a15a17739865/volumes"
Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.303901 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" path="/var/lib/kubelet/pods/f2e81e7f-9610-493c-bdb8-6a7de58b94bf/volumes"
Feb 17 16:27:01 crc kubenswrapper[4829]: I0217 16:27:01.050696 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btrfb"]
Feb 17 16:27:01 crc kubenswrapper[4829]: I0217 16:27:01.064550 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btrfb"]
Feb 17 16:27:02 crc kubenswrapper[4829]: E0217 16:27:02.283716 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:27:02 crc kubenswrapper[4829]: I0217 16:27:02.303530 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" path="/var/lib/kubelet/pods/df678697-9139-4571-9d3b-9c51ec34df7c/volumes"
Feb 17 16:27:08 crc kubenswrapper[4829]: I0217 16:27:08.515871 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" podUID="90b368e2-73a9-4594-8428-e17a7bb1e499" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:27:09 crc kubenswrapper[4829]: I0217 16:27:09.038476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9z4lf"]
Feb 17 16:27:09 crc kubenswrapper[4829]: I0217 16:27:09.048227 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9z4lf"]
Feb 17 16:27:10 crc kubenswrapper[4829]: E0217 16:27:10.284760 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:27:10 crc kubenswrapper[4829]: I0217 16:27:10.298965 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" path="/var/lib/kubelet/pods/e14bea24-3170-4bdb-8811-9a94d94ae4b7/volumes"
Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.062122 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sgsbf"]
Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.072394 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sgsbf"]
Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.292165 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" path="/var/lib/kubelet/pods/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4/volumes"
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.047717 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tfzp7"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.062289 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wlnfn"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.079398 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tfzp7"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.099318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wlnfn"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.110310 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.119406 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.128167 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.136816 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.145566 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.154008 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.162764 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.171402 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.181727 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-gvpcv"]
Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.190948 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-gvpcv"]
Feb 17 16:27:13 crc kubenswrapper[4829]: E0217 16:27:13.282375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.301280 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" path="/var/lib/kubelet/pods/45907bce-01ca-47e8-bfef-12ae037bb254/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.302934 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" path="/var/lib/kubelet/pods/5fb73f59-cddf-4630-b754-264ec2ccee1e/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.304236 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64394b7b-175f-4429-b284-783394b5362b" path="/var/lib/kubelet/pods/64394b7b-175f-4429-b284-783394b5362b/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.305443 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" path="/var/lib/kubelet/pods/84ad18d3-95f7-43e4-b906-65466cf9b14f/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.307708 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" path="/var/lib/kubelet/pods/964c7b6b-c551-489a-9a5b-7fbe31c855b2/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.309881 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" path="/var/lib/kubelet/pods/a1857247-1b55-4f04-91b5-2725347ddd5e/volumes"
Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.310696 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" path="/var/lib/kubelet/pods/f7208dff-6f9e-410a-9b88-e6def8b38478/volumes"
Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.697247 4829 scope.go:117] "RemoveContainer" containerID="20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa"
Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.859927 4829 scope.go:117] "RemoveContainer" containerID="0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa"
Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.884556 4829 scope.go:117] "RemoveContainer" containerID="a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733"
Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.953980 4829 scope.go:117] "RemoveContainer" containerID="4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.003533 4829 scope.go:117] "RemoveContainer" containerID="42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.072560 4829 scope.go:117] "RemoveContainer" containerID="2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.113148 4829 scope.go:117] "RemoveContainer" containerID="50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.166175 4829 scope.go:117] "RemoveContainer" containerID="17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.192456 4829 scope.go:117] "RemoveContainer" containerID="97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.223693 4829 scope.go:117] "RemoveContainer" containerID="e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.250974 4829 scope.go:117] "RemoveContainer" containerID="2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.283412 4829 scope.go:117] "RemoveContainer" containerID="459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.328902 4829 scope.go:117] "RemoveContainer" containerID="717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.358868 4829 scope.go:117] "RemoveContainer" containerID="78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.384404 4829 scope.go:117] "RemoveContainer" containerID="61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.406796 4829 scope.go:117] "RemoveContainer" containerID="718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.430706 4829 scope.go:117] "RemoveContainer" containerID="1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.451969 4829 scope.go:117] "RemoveContainer" containerID="50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.470707 4829 scope.go:117] "RemoveContainer" containerID="17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.491788 4829 scope.go:117] "RemoveContainer" containerID="414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.510799 4829 scope.go:117] "RemoveContainer" containerID="e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.532701 4829 scope.go:117] "RemoveContainer" containerID="6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d"
Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.551402 4829 scope.go:117] "RemoveContainer" containerID="e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c"
Feb 17 16:27:21 crc kubenswrapper[4829]: E0217 16:27:21.283080 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:27:28 crc kubenswrapper[4829]: E0217 16:27:28.291891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:27:29 crc kubenswrapper[4829]: I0217 16:27:29.040730 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cs5v7"]
Feb 17 16:27:29 crc kubenswrapper[4829]: I0217 16:27:29.052529 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cs5v7"]
Feb 17 16:27:30 crc kubenswrapper[4829]: I0217 16:27:30.293679 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" path="/var/lib/kubelet/pods/3fd83d7c-5347-49c7-a979-d63e812d294c/volumes"
Feb 17 16:27:36 crc kubenswrapper[4829]: E0217 16:27:36.287302 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:27:39 crc kubenswrapper[4829]: E0217 16:27:39.280682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:27:49 crc kubenswrapper[4829]: E0217 16:27:49.282979 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:27:51 crc kubenswrapper[4829]: E0217 16:27:51.282441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:27:52 crc kubenswrapper[4829]: I0217 16:27:52.424535 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:27:52 crc kubenswrapper[4829]: I0217 16:27:52.424903 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.418407 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.419204 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.419530 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.421610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:28:04 crc kubenswrapper[4829]: E0217 16:28:04.284778 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:28:13 crc kubenswrapper[4829]: I0217 16:28:13.047875 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jrh5n"]
Feb 17 16:28:13 crc kubenswrapper[4829]: I0217 16:28:13.058844 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jrh5n"]
Feb 17 16:28:13 crc kubenswrapper[4829]: E0217 16:28:13.282704 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.040298 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8s649"]
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.053175 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8s649"]
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.066185 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tpsml"]
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.077349 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tpsml"]
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.294088 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff4740d-5b36-4273-be02-50bec771e157" path="/var/lib/kubelet/pods/8ff4740d-5b36-4273-be02-50bec771e157/volumes"
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.294865 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" path="/var/lib/kubelet/pods/acebba68-0142-4d4e-be34-e31a6ccb8722/volumes"
Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.295597 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" path="/var/lib/kubelet/pods/f8202be9-bbed-45eb-80af-de3018eb6ce2/volumes"
Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.415495 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.415915 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.416048 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.417345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.017608 4829 scope.go:117] "RemoveContainer" containerID="0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237"
Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.057875 4829 scope.go:117] "RemoveContainer" containerID="0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2"
Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.108655 4829 scope.go:117] "RemoveContainer" containerID="1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115"
Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.152961 4829 scope.go:117] "RemoveContainer" containerID="3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6"
Feb 17 16:28:22 crc kubenswrapper[4829]: I0217 16:28:22.424924 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:28:22 crc kubenswrapper[4829]: I0217 16:28:22.425684 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.045182 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xh926"]
Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.068175 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xh926"]
Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.306171 4829 kubelet_volumes.go:163] "Cleaned up
orphaned pod volumes dir" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" path="/var/lib/kubelet/pods/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e/volumes" Feb 17 16:28:28 crc kubenswrapper[4829]: E0217 16:28:28.293425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.054791 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.067318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:28:30 crc kubenswrapper[4829]: E0217 16:28:30.282956 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.295138 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" path="/var/lib/kubelet/pods/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20/volumes" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.673234 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.680996 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.688165 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.761649 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.761925 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.762088 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863767 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863872 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.864359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.864398 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.885723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:38 crc kubenswrapper[4829]: I0217 16:28:38.019918 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:38 crc kubenswrapper[4829]: I0217 16:28:38.603328 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.044980 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" exitCode=0 Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.045054 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c"} Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.045277 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"5985bccf682a6daeb0c3e4594a3b5375cfeaccfafb2b267d869bbdd615d32ed6"} Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.048402 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:28:40 crc kubenswrapper[4829]: E0217 16:28:40.281787 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:41 crc kubenswrapper[4829]: I0217 16:28:41.066993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" 
event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} Feb 17 16:28:42 crc kubenswrapper[4829]: I0217 16:28:42.077192 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" exitCode=0 Feb 17 16:28:42 crc kubenswrapper[4829]: I0217 16:28:42.077297 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} Feb 17 16:28:43 crc kubenswrapper[4829]: I0217 16:28:43.091411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} Feb 17 16:28:43 crc kubenswrapper[4829]: I0217 16:28:43.116823 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c9vfs" podStartSLOduration=2.577167532 podStartE2EDuration="6.116805144s" podCreationTimestamp="2026-02-17 16:28:37 +0000 UTC" firstStartedPulling="2026-02-17 16:28:39.048195151 +0000 UTC m=+2031.465213129" lastFinishedPulling="2026-02-17 16:28:42.587832753 +0000 UTC m=+2035.004850741" observedRunningTime="2026-02-17 16:28:43.10738498 +0000 UTC m=+2035.524402968" watchObservedRunningTime="2026-02-17 16:28:43.116805144 +0000 UTC m=+2035.533823132" Feb 17 16:28:43 crc kubenswrapper[4829]: E0217 16:28:43.280312 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:48 crc kubenswrapper[4829]: I0217 16:28:48.020434 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:48 crc kubenswrapper[4829]: I0217 16:28:48.020974 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:49 crc kubenswrapper[4829]: I0217 16:28:49.102410 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c9vfs" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:49 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:49 crc kubenswrapper[4829]: > Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425144 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425649 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425698 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.426760 4829 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.426838 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" gracePeriod=600 Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.214735 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" exitCode=0 Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.214776 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"} Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.215343 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.215363 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:28:55 crc kubenswrapper[4829]: E0217 16:28:55.283330 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.082151 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.156741 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:58 crc kubenswrapper[4829]: E0217 16:28:58.299991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.329343 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:59 crc kubenswrapper[4829]: I0217 16:28:59.325158 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c9vfs" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" containerID="cri-o://25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" gracePeriod=2 Feb 17 16:28:59 crc kubenswrapper[4829]: I0217 16:28:59.949540 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.105885 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106082 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities" (OuterVolumeSpecName: "utilities") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.108935 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.112544 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv" (OuterVolumeSpecName: "kube-api-access-zl5dv") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "kube-api-access-zl5dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.152034 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.211861 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.211930 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344050 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" exitCode=0 Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"5985bccf682a6daeb0c3e4594a3b5375cfeaccfafb2b267d869bbdd615d32ed6"} Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344177 4829 scope.go:117] "RemoveContainer" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344502 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.377183 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.383859 4829 scope.go:117] "RemoveContainer" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.393097 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.410998 4829 scope.go:117] "RemoveContainer" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.461930 4829 scope.go:117] "RemoveContainer" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.462562 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": container with ID starting with 25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f not found: ID does not exist" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.462617 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} err="failed to get container status \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": rpc error: code = NotFound desc = could not find container \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": container with ID starting with 25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f not found: 
ID does not exist" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.462645 4829 scope.go:117] "RemoveContainer" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.462988 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": container with ID starting with d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f not found: ID does not exist" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463063 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} err="failed to get container status \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": rpc error: code = NotFound desc = could not find container \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": container with ID starting with d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f not found: ID does not exist" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463095 4829 scope.go:117] "RemoveContainer" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.463457 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": container with ID starting with f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c not found: ID does not exist" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463512 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c"} err="failed to get container status \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": rpc error: code = NotFound desc = could not find container \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": container with ID starting with f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c not found: ID does not exist" Feb 17 16:29:02 crc kubenswrapper[4829]: I0217 16:29:02.300979 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a49506-a612-4019-b32c-9e14503fda42" path="/var/lib/kubelet/pods/62a49506-a612-4019-b32c-9e14503fda42/volumes" Feb 17 16:29:07 crc kubenswrapper[4829]: E0217 16:29:07.282525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:09 crc kubenswrapper[4829]: E0217 16:29:09.281285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:17 crc kubenswrapper[4829]: I0217 16:29:17.310965 4829 scope.go:117] "RemoveContainer" containerID="b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc" Feb 17 16:29:17 crc kubenswrapper[4829]: I0217 16:29:17.349233 4829 scope.go:117] "RemoveContainer" containerID="e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6" Feb 17 16:29:19 crc kubenswrapper[4829]: I0217 16:29:19.055700 4829 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:29:19 crc kubenswrapper[4829]: I0217 16:29:19.071782 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.039637 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.052365 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.064952 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.077937 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.090007 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.099300 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.127807 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.147081 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.160326 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.171441 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 
16:29:20.291541 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" path="/var/lib/kubelet/pods/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.292168 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" path="/var/lib/kubelet/pods/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.292737 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544f59e2-daea-45db-99b4-d9714f620a74" path="/var/lib/kubelet/pods/544f59e2-daea-45db-99b4-d9714f620a74/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.293283 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" path="/var/lib/kubelet/pods/c8a9c261-a9c4-49c8-bec3-891a68d897b6/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.294480 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" path="/var/lib/kubelet/pods/c909da16-2d5d-4706-adb8-f8402ed9f01e/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.295163 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" path="/var/lib/kubelet/pods/dcdf2448-5ccb-4351-b022-de49263fd521/volumes" Feb 17 16:29:21 crc kubenswrapper[4829]: E0217 16:29:21.283333 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:21 crc kubenswrapper[4829]: E0217 16:29:21.283348 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:32 crc kubenswrapper[4829]: E0217 16:29:32.283840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:33 crc kubenswrapper[4829]: E0217 16:29:33.281474 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:44 crc kubenswrapper[4829]: E0217 16:29:44.283009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:46 crc kubenswrapper[4829]: E0217 16:29:46.282389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.528785 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-utilities" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530349 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-utilities" Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530459 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530475 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530527 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-content" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530540 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-content" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.531040 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.534059 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.550995 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.637120 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.638015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.638535 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741439 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741656 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.742773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.742834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.774653 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.859563 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.069186 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.092342 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.491451 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101644 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" exitCode=0 Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166"} Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101880 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"883efdb41339a304017f80a94e30713ad2829f6a86d10e2c04b2e00ce0d33fd2"} Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.294465 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" path="/var/lib/kubelet/pods/70d00488-ed97-4f10-bf11-7c57e5a4d631/volumes" Feb 17 16:29:58 crc kubenswrapper[4829]: E0217 16:29:58.295385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:59 crc kubenswrapper[4829]: E0217 16:29:59.284452 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.128325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.185023 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.187094 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.190231 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.190710 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.209082 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.333383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.334230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.334729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.437952 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.438151 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.439764 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.439812 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.452779 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.456492 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.516463 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:01 crc kubenswrapper[4829]: I0217 16:30:01.027530 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:01 crc kubenswrapper[4829]: W0217 16:30:01.031705 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3000c07b_e126_4f72_9667_251ca9a53989.slice/crio-9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f WatchSource:0}: Error finding container 9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f: Status 404 returned error can't find the container with id 9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f Feb 17 16:30:01 crc kubenswrapper[4829]: I0217 16:30:01.147722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerStarted","Data":"9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f"} Feb 17 16:30:02 crc 
kubenswrapper[4829]: I0217 16:30:02.159887 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.159992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.164984 4829 generic.go:334] "Generic (PLEG): container finished" podID="3000c07b-e126-4f72-9667-251ca9a53989" containerID="95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.165062 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerDied","Data":"95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f"} Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.192228 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.274179 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqzdk" podStartSLOduration=2.569755908 podStartE2EDuration="7.274163375s" podCreationTimestamp="2026-02-17 16:29:56 +0000 UTC" firstStartedPulling="2026-02-17 16:29:58.105087169 +0000 UTC m=+2110.522105157" lastFinishedPulling="2026-02-17 16:30:02.809494646 +0000 UTC m=+2115.226512624" observedRunningTime="2026-02-17 16:30:03.236861025 
+0000 UTC m=+2115.653879023" watchObservedRunningTime="2026-02-17 16:30:03.274163375 +0000 UTC m=+2115.691181353" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.720247 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865128 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.866554 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume" (OuterVolumeSpecName: "config-volume") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.871983 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv" (OuterVolumeSpecName: "kube-api-access-q7vlv") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "kube-api-access-q7vlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.873374 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968420 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968752 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968961 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.204961 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" 
event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerDied","Data":"9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f"} Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.205001 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.205015 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.831449 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.842889 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.880655 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:04 crc kubenswrapper[4829]: E0217 16:30:04.881281 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.881299 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.881671 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.883797 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.898683 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992468 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.094943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095049 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.118186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.201153 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.735872 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227452 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" exitCode=0 Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227531 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7"} Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227837 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"820a1f3e598ecbaf9ce9d8dae39e9dfee320e0cb9b10ed62084cb316ab3f70a1"} Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.294765 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" path="/var/lib/kubelet/pods/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f/volumes" Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.860306 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.860618 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:07 crc kubenswrapper[4829]: I0217 16:30:07.239446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" 
event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} Feb 17 16:30:07 crc kubenswrapper[4829]: I0217 16:30:07.917776 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" probeResult="failure" output=< Feb 17 16:30:07 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:30:07 crc kubenswrapper[4829]: > Feb 17 16:30:10 crc kubenswrapper[4829]: E0217 16:30:10.285186 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:11 crc kubenswrapper[4829]: E0217 16:30:11.284262 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:15 crc kubenswrapper[4829]: I0217 16:30:15.329858 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" exitCode=0 Feb 17 16:30:15 crc kubenswrapper[4829]: I0217 16:30:15.330014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 
16:30:17.376177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.421620 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vg97x" podStartSLOduration=3.394077978 podStartE2EDuration="13.421597s" podCreationTimestamp="2026-02-17 16:30:04 +0000 UTC" firstStartedPulling="2026-02-17 16:30:06.229512914 +0000 UTC m=+2118.646530892" lastFinishedPulling="2026-02-17 16:30:16.257031936 +0000 UTC m=+2128.674049914" observedRunningTime="2026-02-17 16:30:17.398202887 +0000 UTC m=+2129.815220895" watchObservedRunningTime="2026-02-17 16:30:17.421597 +0000 UTC m=+2129.838615028" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.503431 4829 scope.go:117] "RemoveContainer" containerID="19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.530205 4829 scope.go:117] "RemoveContainer" containerID="56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.604026 4829 scope.go:117] "RemoveContainer" containerID="7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.650383 4829 scope.go:117] "RemoveContainer" containerID="a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.705019 4829 scope.go:117] "RemoveContainer" containerID="eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.762943 4829 scope.go:117] "RemoveContainer" containerID="163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349" Feb 17 16:30:17 crc 
kubenswrapper[4829]: I0217 16:30:17.821624 4829 scope.go:117] "RemoveContainer" containerID="a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.844839 4829 scope.go:117] "RemoveContainer" containerID="18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.914469 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" probeResult="failure" output=< Feb 17 16:30:17 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:30:17 crc kubenswrapper[4829]: > Feb 17 16:30:21 crc kubenswrapper[4829]: I0217 16:30:21.040217 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:30:21 crc kubenswrapper[4829]: I0217 16:30:21.051360 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:30:22 crc kubenswrapper[4829]: E0217 16:30:22.282054 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:22 crc kubenswrapper[4829]: I0217 16:30:22.293645 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" path="/var/lib/kubelet/pods/bef56b6a-4a1c-4305-a88d-3654df130c52/volumes" Feb 17 16:30:23 crc kubenswrapper[4829]: E0217 16:30:23.280745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.031947 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.068076 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.077374 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.086961 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.202117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.202177 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.254081 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.501378 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.564660 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.041866 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.056403 4829 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.296735 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" path="/var/lib/kubelet/pods/17cc49ce-4e47-470a-ad6b-a4127308a7e4/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.298535 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" path="/var/lib/kubelet/pods/264a77a9-afad-42ac-ac8f-7d705e242db5/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.300212 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" path="/var/lib/kubelet/pods/38fcc02f-9122-4ea6-bb0e-ef135805c127/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.924622 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.988797 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:27 crc kubenswrapper[4829]: I0217 16:30:27.473463 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vg97x" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" containerID="cri-o://5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" gracePeriod=2 Feb 17 16:30:27 crc kubenswrapper[4829]: I0217 16:30:27.889799 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.312659 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.363851 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.364172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.364420 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.365498 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities" (OuterVolumeSpecName: "utilities") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.365775 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.383771 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x" (OuterVolumeSpecName: "kube-api-access-lnj7x") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "kube-api-access-lnj7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.469312 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.502535 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" exitCode=0 Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.502975 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" containerID="cri-o://30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" gracePeriod=2 Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503853 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503875 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"820a1f3e598ecbaf9ce9d8dae39e9dfee320e0cb9b10ed62084cb316ab3f70a1"} Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503952 4829 scope.go:117] "RemoveContainer" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.552564 4829 scope.go:117] "RemoveContainer" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.556882 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.573121 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.629050 4829 scope.go:117] "RemoveContainer" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.752473 4829 scope.go:117] "RemoveContainer" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.752976 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": container with ID starting with 5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f not found: ID does not exist" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753018 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} err="failed to get container status \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": rpc error: code = NotFound desc = could not find container \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": container with ID starting with 5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753090 4829 scope.go:117] "RemoveContainer" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.753379 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": container with ID starting with 515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883 not found: ID does not exist" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753410 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} err="failed to get container status \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": rpc error: code = NotFound desc = could not find container \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": container with ID starting with 515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883 not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753428 4829 scope.go:117] "RemoveContainer" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.753678 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": container with ID starting with 3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7 not found: ID does not exist" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753710 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7"} err="failed to get container status \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": rpc error: code = NotFound desc = could 
not find container \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": container with ID starting with 3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7 not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.853395 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.864091 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.057367 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094662 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094774 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.095401 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities" (OuterVolumeSpecName: "utilities") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.095554 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.101684 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r" (OuterVolumeSpecName: "kube-api-access-jwc8r") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "kube-api-access-jwc8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.168638 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.199505 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.199542 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521217 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" exitCode=0 Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521270 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"883efdb41339a304017f80a94e30713ad2829f6a86d10e2c04b2e00ce0d33fd2"} Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521323 4829 scope.go:117] "RemoveContainer" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521479 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.567782 4829 scope.go:117] "RemoveContainer" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.587000 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.599113 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.599322 4829 scope.go:117] "RemoveContainer" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.632795 4829 scope.go:117] "RemoveContainer" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.633304 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": container with ID starting with 30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33 not found: ID does not exist" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633359 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} err="failed to get container status \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": rpc error: code = NotFound desc = could not find container \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": container with ID starting with 30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33 not 
found: ID does not exist" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633394 4829 scope.go:117] "RemoveContainer" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.633870 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": container with ID starting with 5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5 not found: ID does not exist" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633899 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} err="failed to get container status \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": rpc error: code = NotFound desc = could not find container \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": container with ID starting with 5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5 not found: ID does not exist" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633919 4829 scope.go:117] "RemoveContainer" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.634198 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": container with ID starting with 96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166 not found: ID does not exist" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.634224 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166"} err="failed to get container status \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": rpc error: code = NotFound desc = could not find container \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": container with ID starting with 96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166 not found: ID does not exist" Feb 17 16:30:30 crc kubenswrapper[4829]: I0217 16:30:30.297785 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" path="/var/lib/kubelet/pods/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f/volumes" Feb 17 16:30:30 crc kubenswrapper[4829]: I0217 16:30:30.298853 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" path="/var/lib/kubelet/pods/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a/volumes" Feb 17 16:30:35 crc kubenswrapper[4829]: E0217 16:30:35.283170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:36 crc kubenswrapper[4829]: E0217 16:30:36.282154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:49 crc kubenswrapper[4829]: E0217 16:30:49.285416 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:50 crc kubenswrapper[4829]: E0217 16:30:50.282017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:52 crc kubenswrapper[4829]: I0217 16:30:52.424458 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:52 crc kubenswrapper[4829]: I0217 16:30:52.424543 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:03 crc kubenswrapper[4829]: E0217 16:31:03.281834 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:03 crc kubenswrapper[4829]: E0217 16:31:03.281858 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.066325 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.082958 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.302975 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" path="/var/lib/kubelet/pods/85602fcf-2cee-4c92-8270-623eb79c4baa/volumes" Feb 17 16:31:15 crc kubenswrapper[4829]: E0217 16:31:15.282299 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.021177 4829 scope.go:117] "RemoveContainer" containerID="ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.047844 4829 scope.go:117] "RemoveContainer" containerID="035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.117561 4829 scope.go:117] "RemoveContainer" containerID="1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.189853 4829 scope.go:117] "RemoveContainer" containerID="162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.256845 4829 scope.go:117] "RemoveContainer" 
containerID="c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447" Feb 17 16:31:18 crc kubenswrapper[4829]: E0217 16:31:18.299870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:22 crc kubenswrapper[4829]: I0217 16:31:22.424737 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:22 crc kubenswrapper[4829]: I0217 16:31:22.425389 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:27 crc kubenswrapper[4829]: E0217 16:31:27.281233 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:33 crc kubenswrapper[4829]: E0217 16:31:33.282061 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:41 crc kubenswrapper[4829]: E0217 16:31:41.283293 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:44 crc kubenswrapper[4829]: E0217 16:31:44.281096 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424136 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424598 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.425459 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.425501 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" gracePeriod=600 Feb 17 16:31:52 crc kubenswrapper[4829]: E0217 16:31:52.563123 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.514055 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" exitCode=0 Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.514120 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.515244 4829 scope.go:117] "RemoveContainer" containerID="c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.516042 4829 
scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:31:53 crc kubenswrapper[4829]: E0217 16:31:53.516527 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:31:54 crc kubenswrapper[4829]: E0217 16:31:54.281378 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:56 crc kubenswrapper[4829]: E0217 16:31:56.280723 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:06 crc kubenswrapper[4829]: I0217 16:32:06.280601 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:06 crc kubenswrapper[4829]: E0217 16:32:06.281423 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:08 crc kubenswrapper[4829]: E0217 16:32:08.295814 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:09 crc kubenswrapper[4829]: E0217 16:32:09.281473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:20 crc kubenswrapper[4829]: E0217 16:32:20.283211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:21 crc kubenswrapper[4829]: I0217 16:32:21.279687 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:21 crc kubenswrapper[4829]: E0217 16:32:21.280392 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 
16:32:21 crc kubenswrapper[4829]: E0217 16:32:21.281542 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:32 crc kubenswrapper[4829]: I0217 16:32:32.281186 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:32 crc kubenswrapper[4829]: E0217 16:32:32.282291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:34 crc kubenswrapper[4829]: E0217 16:32:34.284613 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:35 crc kubenswrapper[4829]: E0217 16:32:35.281078 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:44 crc kubenswrapper[4829]: I0217 16:32:44.279700 4829 scope.go:117] "RemoveContainer" 
containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:44 crc kubenswrapper[4829]: E0217 16:32:44.280424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:46 crc kubenswrapper[4829]: E0217 16:32:46.281942 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:49 crc kubenswrapper[4829]: E0217 16:32:49.281908 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:57 crc kubenswrapper[4829]: E0217 16:32:57.281708 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:58 crc kubenswrapper[4829]: I0217 16:32:58.286935 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:58 crc kubenswrapper[4829]: E0217 
16:32:58.287543 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:01 crc kubenswrapper[4829]: E0217 16:33:01.280863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.416261 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.416905 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.417073 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.418233 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:11 crc kubenswrapper[4829]: I0217 16:33:11.280430 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:11 crc kubenswrapper[4829]: E0217 16:33:11.281271 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:13 crc kubenswrapper[4829]: E0217 16:33:13.281619 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:23 crc kubenswrapper[4829]: I0217 16:33:23.280017 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:23 crc kubenswrapper[4829]: E0217 16:33:23.280638 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:23 crc kubenswrapper[4829]: E0217 16:33:23.282525 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.408969 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.409981 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.410210 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.411442 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:35 crc kubenswrapper[4829]: I0217 16:33:35.280414 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:35 crc kubenswrapper[4829]: E0217 16:33:35.281309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:36 crc kubenswrapper[4829]: I0217 16:33:36.607691 4829 generic.go:334] "Generic (PLEG): container finished" podID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerID="e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf" exitCode=2 Feb 17 16:33:36 crc kubenswrapper[4829]: I0217 16:33:36.610654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerDied","Data":"e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf"} Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.214846 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:33:38 crc kubenswrapper[4829]: E0217 16:33:38.308255 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.328555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.329463 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.329612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.347494 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt" (OuterVolumeSpecName: "kube-api-access-gwzvt") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "kube-api-access-gwzvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.365703 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.376741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory" (OuterVolumeSpecName: "inventory") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438726 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438768 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438784 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634296 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" 
event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerDied","Data":"4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313"} Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634354 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634358 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:33:39 crc kubenswrapper[4829]: E0217 16:33:39.284187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.035781 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"] Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036887 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036904 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036923 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036931 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 
crc kubenswrapper[4829]: E0217 16:33:46.036948 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036956 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036966 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036974 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037004 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037012 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037076 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037084 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037369 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037389 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037404 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.038494 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.044271 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.044473 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.045475 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.045672 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.065369 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"] Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.165936 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.166604 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.166659 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.276013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.276163 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.296762 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.404805 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.965107 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"] Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.971984 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:33:47 crc kubenswrapper[4829]: I0217 16:33:47.720551 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerStarted","Data":"5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099"} Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.299637 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:48 crc kubenswrapper[4829]: E0217 16:33:48.300194 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.731950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerStarted","Data":"17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d"} 
Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.752768 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" podStartSLOduration=2.200639235 podStartE2EDuration="2.752749053s" podCreationTimestamp="2026-02-17 16:33:46 +0000 UTC" firstStartedPulling="2026-02-17 16:33:46.971808652 +0000 UTC m=+2339.388826630" lastFinishedPulling="2026-02-17 16:33:47.52391847 +0000 UTC m=+2339.940936448" observedRunningTime="2026-02-17 16:33:48.747293696 +0000 UTC m=+2341.164311674" watchObservedRunningTime="2026-02-17 16:33:48.752749053 +0000 UTC m=+2341.169767031" Feb 17 16:33:49 crc kubenswrapper[4829]: E0217 16:33:49.282438 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:53 crc kubenswrapper[4829]: E0217 16:33:53.281401 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:34:01 crc kubenswrapper[4829]: I0217 16:34:01.280796 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:34:01 crc kubenswrapper[4829]: E0217 16:34:01.282016 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:34:03 crc kubenswrapper[4829]: E0217 16:34:03.281757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:34:06 crc kubenswrapper[4829]: E0217 16:34:06.282108 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:34:15 crc kubenswrapper[4829]: I0217 16:34:15.279981 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:34:15 crc kubenswrapper[4829]: E0217 16:34:15.280982 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:34:15 crc kubenswrapper[4829]: E0217 16:34:15.283528 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:34:19 crc kubenswrapper[4829]: E0217 16:34:19.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:34:28 crc kubenswrapper[4829]: E0217 16:34:28.297868 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:34:29 crc kubenswrapper[4829]: I0217 16:34:29.279867 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:34:29 crc kubenswrapper[4829]: E0217 16:34:29.280383 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:34:32 crc kubenswrapper[4829]: E0217 16:34:32.284993 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:34:41 crc kubenswrapper[4829]: E0217 16:34:41.281379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:34:43 crc kubenswrapper[4829]: I0217 16:34:43.279526 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:34:43 crc kubenswrapper[4829]: E0217 16:34:43.280111 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:34:44 crc kubenswrapper[4829]: E0217 16:34:44.282017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:34:53 crc kubenswrapper[4829]: E0217 16:34:53.282375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:34:56 crc kubenswrapper[4829]: I0217 
16:34:56.282110 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:34:56 crc kubenswrapper[4829]: E0217 16:34:56.283808 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:34:56 crc kubenswrapper[4829]: E0217 16:34:56.287143 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:07 crc kubenswrapper[4829]: E0217 16:35:07.282495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:35:08 crc kubenswrapper[4829]: I0217 16:35:08.293980 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:35:08 crc kubenswrapper[4829]: E0217 16:35:08.295017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:35:08 crc kubenswrapper[4829]: E0217 16:35:08.298683 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:19 crc kubenswrapper[4829]: E0217 16:35:19.283727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:35:19 crc kubenswrapper[4829]: E0217 16:35:19.284377 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:22 crc kubenswrapper[4829]: I0217 16:35:22.279774 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:35:22 crc kubenswrapper[4829]: E0217 16:35:22.280629 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" 
Feb 17 16:35:30 crc kubenswrapper[4829]: E0217 16:35:30.282756 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:35:32 crc kubenswrapper[4829]: E0217 16:35:32.283950 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:33 crc kubenswrapper[4829]: I0217 16:35:33.279345 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:35:33 crc kubenswrapper[4829]: E0217 16:35:33.279711 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:35:44 crc kubenswrapper[4829]: I0217 16:35:44.279817 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:35:44 crc kubenswrapper[4829]: E0217 16:35:44.280802 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:35:44 crc kubenswrapper[4829]: E0217 16:35:44.282193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:35:46 crc kubenswrapper[4829]: E0217 16:35:46.284871 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:57 crc kubenswrapper[4829]: I0217 16:35:57.281286 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:35:57 crc kubenswrapper[4829]: E0217 16:35:57.282636 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:35:57 crc kubenswrapper[4829]: E0217 16:35:57.284262 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:35:59 crc kubenswrapper[4829]: E0217 16:35:59.284488 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:09 crc kubenswrapper[4829]: I0217 16:36:09.279750 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:36:09 crc kubenswrapper[4829]: E0217 16:36:09.280440 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:36:11 crc kubenswrapper[4829]: E0217 16:36:11.286030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:12 crc kubenswrapper[4829]: E0217 16:36:12.288480 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.600880 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.603775 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.634650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.751998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.752062 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.752086 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854525 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854621 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854650 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.855017 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.875346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlqs\" (UniqueName: 
\"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.924618 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:18 crc kubenswrapper[4829]: I0217 16:36:18.482994 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:18 crc kubenswrapper[4829]: I0217 16:36:18.585062 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"7a1f6e48924ecff9268477f3c718e0a5dbc385e04f2e313cd9042e7148b74cc2"} Feb 17 16:36:19 crc kubenswrapper[4829]: I0217 16:36:19.598814 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" exitCode=0 Feb 17 16:36:19 crc kubenswrapper[4829]: I0217 16:36:19.599042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689"} Feb 17 16:36:21 crc kubenswrapper[4829]: I0217 16:36:21.634737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"} Feb 17 16:36:22 crc kubenswrapper[4829]: I0217 16:36:22.280844 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 
17 16:36:22 crc kubenswrapper[4829]: E0217 16:36:22.281554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:36:22 crc kubenswrapper[4829]: E0217 16:36:22.284068 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:23 crc kubenswrapper[4829]: E0217 16:36:23.538232 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ab4c2e_a614_46f8_b7fc_259bacfeb8b4.slice/crio-dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:36:23 crc kubenswrapper[4829]: I0217 16:36:23.658397 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" exitCode=0 Feb 17 16:36:23 crc kubenswrapper[4829]: I0217 16:36:23.658442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"} Feb 17 16:36:24 crc kubenswrapper[4829]: I0217 16:36:24.671821 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"} Feb 17 16:36:24 crc kubenswrapper[4829]: I0217 16:36:24.705022 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgcfb" podStartSLOduration=3.145316321 podStartE2EDuration="7.704986867s" podCreationTimestamp="2026-02-17 16:36:17 +0000 UTC" firstStartedPulling="2026-02-17 16:36:19.601672734 +0000 UTC m=+2492.018690702" lastFinishedPulling="2026-02-17 16:36:24.16134327 +0000 UTC m=+2496.578361248" observedRunningTime="2026-02-17 16:36:24.692454298 +0000 UTC m=+2497.109472276" watchObservedRunningTime="2026-02-17 16:36:24.704986867 +0000 UTC m=+2497.122004855" Feb 17 16:36:26 crc kubenswrapper[4829]: E0217 16:36:26.282135 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.925339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.925823 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.988343 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:33 crc kubenswrapper[4829]: E0217 16:36:33.282930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:34 crc kubenswrapper[4829]: I0217 16:36:34.279630 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:36:34 crc kubenswrapper[4829]: E0217 16:36:34.279870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:36:37 crc kubenswrapper[4829]: I0217 16:36:37.997384 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:38 crc kubenswrapper[4829]: I0217 16:36:38.057370 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:38 crc kubenswrapper[4829]: I0217 16:36:38.822235 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgcfb" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" containerID="cri-o://c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" gracePeriod=2 Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.281746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.479065 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561199 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561546 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.564184 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities" (OuterVolumeSpecName: "utilities") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.576326 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs" (OuterVolumeSpecName: "kube-api-access-9nlqs") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "kube-api-access-9nlqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.611792 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664401 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664436 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664447 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836221 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" 
containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" exitCode=0 Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"} Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836286 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836304 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"7a1f6e48924ecff9268477f3c718e0a5dbc385e04f2e313cd9042e7148b74cc2"} Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836337 4829 scope.go:117] "RemoveContainer" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.871632 4829 scope.go:117] "RemoveContainer" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.890837 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.907140 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"] Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.919931 4829 scope.go:117] "RemoveContainer" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.960997 4829 scope.go:117] "RemoveContainer" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" Feb 17 
16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.961355 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": container with ID starting with c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac not found: ID does not exist" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961397 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"} err="failed to get container status \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": rpc error: code = NotFound desc = could not find container \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": container with ID starting with c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac not found: ID does not exist" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961421 4829 scope.go:117] "RemoveContainer" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.961853 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": container with ID starting with dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9 not found: ID does not exist" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961882 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"} err="failed to get container status 
\"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": rpc error: code = NotFound desc = could not find container \"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": container with ID starting with dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9 not found: ID does not exist" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961901 4829 scope.go:117] "RemoveContainer" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.962366 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": container with ID starting with 11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689 not found: ID does not exist" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.962395 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689"} err="failed to get container status \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": rpc error: code = NotFound desc = could not find container \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": container with ID starting with 11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689 not found: ID does not exist" Feb 17 16:36:40 crc kubenswrapper[4829]: I0217 16:36:40.291593 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" path="/var/lib/kubelet/pods/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4/volumes" Feb 17 16:36:47 crc kubenswrapper[4829]: E0217 16:36:47.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:49 crc kubenswrapper[4829]: I0217 16:36:49.279727 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:36:49 crc kubenswrapper[4829]: E0217 16:36:49.280913 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:36:52 crc kubenswrapper[4829]: E0217 16:36:52.283273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:36:59 crc kubenswrapper[4829]: E0217 16:36:59.283651 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:04 crc kubenswrapper[4829]: I0217 16:37:04.280554 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:37:04 crc kubenswrapper[4829]: E0217 16:37:04.283320 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:05 crc kubenswrapper[4829]: I0217 16:37:05.133670 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} Feb 17 16:37:13 crc kubenswrapper[4829]: E0217 16:37:13.281724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:17 crc kubenswrapper[4829]: E0217 16:37:17.281524 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:24 crc kubenswrapper[4829]: E0217 16:37:24.283779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:30 crc kubenswrapper[4829]: E0217 16:37:30.281996 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:36 crc kubenswrapper[4829]: E0217 16:37:36.282727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:45 crc kubenswrapper[4829]: E0217 16:37:45.281951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:48 crc kubenswrapper[4829]: E0217 16:37:48.303891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:56 crc kubenswrapper[4829]: E0217 16:37:56.282197 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:59 crc kubenswrapper[4829]: E0217 16:37:59.286306 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:09 crc kubenswrapper[4829]: E0217 16:38:09.283789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130099 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130562 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130704 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.132615 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:20 crc kubenswrapper[4829]: E0217 16:38:20.283593 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:24 crc kubenswrapper[4829]: E0217 16:38:24.282700 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.407268 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.408190 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.408491 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.409807 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:38 crc kubenswrapper[4829]: E0217 16:38:38.306693 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:46 crc kubenswrapper[4829]: E0217 16:38:46.283714 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:51 crc kubenswrapper[4829]: E0217 16:38:51.283224 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:01 crc kubenswrapper[4829]: E0217 16:39:01.285208 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:06 crc kubenswrapper[4829]: E0217 16:39:06.282436 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.561992 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562760 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-utilities" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562785 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-utilities" Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562825 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-content" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562837 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-content" Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562883 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562896 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.563294 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.570754 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.613061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628231 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730374 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730427 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730852 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.755386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.893016 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.439319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763450 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" exitCode=0 Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763512 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482"} Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"2f5f9ac884b93c77a1abad82cb7157f8f7dddf20536b72ef99bb6974aee0fb66"} Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.765993 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:39:09 crc kubenswrapper[4829]: I0217 16:39:09.777000 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} Feb 17 16:39:10 crc kubenswrapper[4829]: I0217 16:39:10.799792 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" exitCode=0 Feb 17 16:39:10 crc kubenswrapper[4829]: I0217 16:39:10.800129 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} Feb 17 16:39:12 crc kubenswrapper[4829]: I0217 16:39:12.829007 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} Feb 17 16:39:12 crc kubenswrapper[4829]: I0217 16:39:12.854664 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7c56n" podStartSLOduration=2.303761266 podStartE2EDuration="5.85464276s" podCreationTimestamp="2026-02-17 16:39:07 +0000 UTC" firstStartedPulling="2026-02-17 16:39:08.765447122 +0000 UTC m=+2661.182465100" lastFinishedPulling="2026-02-17 16:39:12.316328606 +0000 UTC m=+2664.733346594" observedRunningTime="2026-02-17 16:39:12.849364258 +0000 UTC m=+2665.266382246" watchObservedRunningTime="2026-02-17 16:39:12.85464276 +0000 UTC m=+2665.271660738" Feb 17 16:39:14 crc kubenswrapper[4829]: E0217 16:39:14.281327 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:17 crc kubenswrapper[4829]: I0217 16:39:17.893952 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:17 crc kubenswrapper[4829]: I0217 16:39:17.894684 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:17 crc kubenswrapper[4829]: 
I0217 16:39:17.957353 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:18 crc kubenswrapper[4829]: I0217 16:39:18.960981 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:19 crc kubenswrapper[4829]: I0217 16:39:19.025589 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:19 crc kubenswrapper[4829]: E0217 16:39:19.283007 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:20 crc kubenswrapper[4829]: I0217 16:39:20.910950 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7c56n" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" containerID="cri-o://085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" gracePeriod=2 Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.479767 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601931 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.603401 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities" (OuterVolumeSpecName: "utilities") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.608860 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz" (OuterVolumeSpecName: "kube-api-access-dqnxz") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "kube-api-access-dqnxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.644325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705636 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705684 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705699 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926216 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" exitCode=0 Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926303 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926324 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926382 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"2f5f9ac884b93c77a1abad82cb7157f8f7dddf20536b72ef99bb6974aee0fb66"} Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926401 4829 scope.go:117] "RemoveContainer" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.967653 4829 scope.go:117] "RemoveContainer" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.996829 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.004841 4829 scope.go:117] "RemoveContainer" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.017406 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.071921 4829 scope.go:117] "RemoveContainer" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.072542 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": container with ID starting with 085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557 not found: ID does not exist" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.072642 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} err="failed to get container status \"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": rpc error: code = NotFound desc = could not find container \"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": container with ID starting with 085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.072691 4829 scope.go:117] "RemoveContainer" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.073228 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": container with ID starting with bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3 not found: ID does not exist" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073273 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} err="failed to get container status \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": rpc error: code = NotFound desc = could not find container \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": container with ID 
starting with bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073302 4829 scope.go:117] "RemoveContainer" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.073684 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": container with ID starting with fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482 not found: ID does not exist" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073810 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482"} err="failed to get container status \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": rpc error: code = NotFound desc = could not find container \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": container with ID starting with fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.309906 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" path="/var/lib/kubelet/pods/d2f1183e-fedb-40ba-83b4-9ae43daefc72/volumes" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.425329 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 
16:39:22.425422 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:25 crc kubenswrapper[4829]: E0217 16:39:25.283265 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:33 crc kubenswrapper[4829]: E0217 16:39:33.282008 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:39 crc kubenswrapper[4829]: E0217 16:39:39.282743 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:45 crc kubenswrapper[4829]: E0217 16:39:45.283540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:52 crc kubenswrapper[4829]: 
I0217 16:39:52.424266 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:52 crc kubenswrapper[4829]: I0217 16:39:52.424911 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:54 crc kubenswrapper[4829]: E0217 16:39:54.283089 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:00 crc kubenswrapper[4829]: E0217 16:40:00.282649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:03 crc kubenswrapper[4829]: I0217 16:40:03.473974 4829 generic.go:334] "Generic (PLEG): container finished" podID="30690071-6fc2-4647-82c0-6e5234005aec" containerID="17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d" exitCode=2 Feb 17 16:40:03 crc kubenswrapper[4829]: I0217 16:40:03.474371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" 
event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerDied","Data":"17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d"} Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.121530 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.170717 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.170809 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.171073 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.185891 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn" (OuterVolumeSpecName: "kube-api-access-vgbsn") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "kube-api-access-vgbsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.208805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory" (OuterVolumeSpecName: "inventory") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.233445 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274684 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274740 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274753 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499085 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" 
event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerDied","Data":"5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099"} Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499122 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499157 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:40:06 crc kubenswrapper[4829]: E0217 16:40:06.284353 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:11 crc kubenswrapper[4829]: E0217 16:40:11.282866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:20 crc kubenswrapper[4829]: E0217 16:40:20.281555 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.424790 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.425419 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.425478 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.426560 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.426693 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28" gracePeriod=600 Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746428 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28" exitCode=0 Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746510 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746855 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.033802 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034383 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034409 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034478 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034488 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034498 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-content" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034505 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-content" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034525 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-utilities" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034533 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-utilities" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034822 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034862 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.035844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039194 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.040372 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.050195 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170540 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272782 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.279289 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.286121 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.293912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 
16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.355815 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.763500 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"} Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.911110 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: W0217 16:40:23.915536 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0fd9f61_596b_4ef3_b6da_6ebe6b04d497.slice/crio-a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3 WatchSource:0}: Error finding container a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3: Status 404 returned error can't find the container with id a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3 Feb 17 16:40:24 crc kubenswrapper[4829]: E0217 16:40:24.282685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.778165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" 
event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerStarted","Data":"567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71"} Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.779654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerStarted","Data":"a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3"} Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.804727 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" podStartSLOduration=1.286118851 podStartE2EDuration="1.804707494s" podCreationTimestamp="2026-02-17 16:40:23 +0000 UTC" firstStartedPulling="2026-02-17 16:40:23.918489701 +0000 UTC m=+2736.335507679" lastFinishedPulling="2026-02-17 16:40:24.437078334 +0000 UTC m=+2736.854096322" observedRunningTime="2026-02-17 16:40:24.802295059 +0000 UTC m=+2737.219313037" watchObservedRunningTime="2026-02-17 16:40:24.804707494 +0000 UTC m=+2737.221725472" Feb 17 16:40:33 crc kubenswrapper[4829]: E0217 16:40:33.283787 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:35 crc kubenswrapper[4829]: E0217 16:40:35.297155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:48 crc kubenswrapper[4829]: 
E0217 16:40:48.294765 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:48 crc kubenswrapper[4829]: E0217 16:40:48.295518 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:59 crc kubenswrapper[4829]: E0217 16:40:59.281704 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:00 crc kubenswrapper[4829]: E0217 16:41:00.283894 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:12 crc kubenswrapper[4829]: E0217 16:41:12.282175 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:15 crc 
kubenswrapper[4829]: E0217 16:41:15.281599 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:24 crc kubenswrapper[4829]: E0217 16:41:24.281618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.564241 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.569106 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.580012 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760076 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760260 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760956 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.783091 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.895760 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:26 crc kubenswrapper[4829]: I0217 16:41:26.461727 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:26 crc kubenswrapper[4829]: I0217 16:41:26.525809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"531071b097d235504f97e76bdf7dd4e2670ea82dee119089f6be91830c6db602"} Feb 17 16:41:27 crc kubenswrapper[4829]: I0217 16:41:27.535914 4829 generic.go:334] "Generic (PLEG): container finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750" exitCode=0 Feb 17 16:41:27 crc kubenswrapper[4829]: I0217 16:41:27.536112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750"} Feb 17 16:41:28 crc kubenswrapper[4829]: I0217 16:41:28.551513 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d"} Feb 17 16:41:30 crc kubenswrapper[4829]: E0217 16:41:30.286115 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:30 crc kubenswrapper[4829]: I0217 16:41:30.575140 4829 generic.go:334] "Generic (PLEG): container 
finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d" exitCode=0 Feb 17 16:41:30 crc kubenswrapper[4829]: I0217 16:41:30.575175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d"} Feb 17 16:41:31 crc kubenswrapper[4829]: I0217 16:41:31.586120 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f"} Feb 17 16:41:31 crc kubenswrapper[4829]: I0217 16:41:31.615228 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdh9b" podStartSLOduration=3.179505795 podStartE2EDuration="6.615208021s" podCreationTimestamp="2026-02-17 16:41:25 +0000 UTC" firstStartedPulling="2026-02-17 16:41:27.538492569 +0000 UTC m=+2799.955510547" lastFinishedPulling="2026-02-17 16:41:30.974194795 +0000 UTC m=+2803.391212773" observedRunningTime="2026-02-17 16:41:31.605828469 +0000 UTC m=+2804.022846457" watchObservedRunningTime="2026-02-17 16:41:31.615208021 +0000 UTC m=+2804.032225999" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.932623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.935937 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.953455 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970569 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970824 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970873 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.073899 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074018 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.075018 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.101536 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.270367 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.869895 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:33 crc kubenswrapper[4829]: W0217 16:41:33.871161 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcafaefdf_5318_4146_bf8f_f2e8d5d83ec6.slice/crio-8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9 WatchSource:0}: Error finding container 8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9: Status 404 returned error can't find the container with id 8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9 Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641100 4829 generic.go:334] "Generic (PLEG): container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586" exitCode=0 Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"} Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641489 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9"} Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.896490 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.896938 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.949606 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:36 crc kubenswrapper[4829]: E0217 16:41:36.283719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:36 crc kubenswrapper[4829]: I0217 16:41:36.729105 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:37 crc kubenswrapper[4829]: I0217 16:41:37.125643 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:38 crc kubenswrapper[4829]: I0217 16:41:38.690567 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"} Feb 17 16:41:38 crc kubenswrapper[4829]: I0217 16:41:38.691455 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdh9b" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" containerID="cri-o://a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f" gracePeriod=2 Feb 17 16:41:39 crc kubenswrapper[4829]: I0217 16:41:39.703898 4829 generic.go:334] "Generic (PLEG): container finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f" exitCode=0 Feb 17 16:41:39 crc 
kubenswrapper[4829]: I0217 16:41:39.703955 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f"}
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.204435 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b"
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285646 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") "
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285711 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") "
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") "
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.286773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities" (OuterVolumeSpecName: "utilities") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.304399 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x" (OuterVolumeSpecName: "kube-api-access-4vk9x") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "kube-api-access-4vk9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.342454 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.388102 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.389074 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.389204 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") on node \"crc\" DevicePath \"\""
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"531071b097d235504f97e76bdf7dd4e2670ea82dee119089f6be91830c6db602"}
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717549 4829 scope.go:117] "RemoveContainer" containerID="a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f"
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717604 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b"
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.763614 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"]
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.777955 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"]
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.846799 4829 scope.go:117] "RemoveContainer" containerID="1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d"
Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.881632 4829 scope.go:117] "RemoveContainer" containerID="ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750"
Feb 17 16:41:42 crc kubenswrapper[4829]: I0217 16:41:42.293343 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" path="/var/lib/kubelet/pods/939a62be-82dd-4a76-9dc2-8fbadadc3739/volumes"
Feb 17 16:41:43 crc kubenswrapper[4829]: E0217 16:41:43.280649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:41:45 crc kubenswrapper[4829]: I0217 16:41:45.794425 4829 generic.go:334] "Generic (PLEG): container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251" exitCode=0
Feb 17 16:41:45 crc kubenswrapper[4829]: I0217 16:41:45.795639 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"}
Feb 17 16:41:49 crc kubenswrapper[4829]: E0217 16:41:49.281649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:41:55 crc kubenswrapper[4829]: E0217 16:41:55.429930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:41:56 crc kubenswrapper[4829]: I0217 16:41:56.962809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"}
Feb 17 16:41:56 crc kubenswrapper[4829]: I0217 16:41:56.990207 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lg2b5" podStartSLOduration=4.204083472 podStartE2EDuration="24.990180678s" podCreationTimestamp="2026-02-17 16:41:32 +0000 UTC" firstStartedPulling="2026-02-17 16:41:34.645243213 +0000 UTC m=+2807.062261191" lastFinishedPulling="2026-02-17 16:41:55.431340419 +0000 UTC m=+2827.848358397" observedRunningTime="2026-02-17 16:41:56.979649235 +0000 UTC m=+2829.396667213" watchObservedRunningTime="2026-02-17 16:41:56.990180678 +0000 UTC m=+2829.407198676"
Feb 17 16:42:03 crc kubenswrapper[4829]: I0217 16:42:03.271083 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:03 crc kubenswrapper[4829]: I0217 16:42:03.271941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:03 crc kubenswrapper[4829]: E0217 16:42:03.283246 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:42:04 crc kubenswrapper[4829]: I0217 16:42:04.325089 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lg2b5" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:42:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:42:04 crc kubenswrapper[4829]: >
Feb 17 16:42:10 crc kubenswrapper[4829]: E0217 16:42:10.282466 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.336017 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.402419 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.586851 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"]
Feb 17 16:42:15 crc kubenswrapper[4829]: I0217 16:42:15.177303 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lg2b5" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" containerID="cri-o://ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" gracePeriod=2
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:15.843128 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.023568 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") "
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024240 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") "
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") "
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024564 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities" (OuterVolumeSpecName: "utilities") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.025056 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.045962 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr" (OuterVolumeSpecName: "kube-api-access-fpqlr") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "kube-api-access-fpqlr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.129254 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.166512 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190525 4829 generic.go:334] "Generic (PLEG): container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" exitCode=0
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190566 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"}
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190616 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9"}
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190637 4829 scope.go:117] "RemoveContainer" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190785 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.220747 4829 scope.go:117] "RemoveContainer" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.237154 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.239061 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"]
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.263417 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"]
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.267722 4829 scope.go:117] "RemoveContainer" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"
Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.280854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.294969 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" path="/var/lib/kubelet/pods/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6/volumes"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.321794 4829 scope.go:117] "RemoveContainer" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"
Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.322311 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": container with ID starting with ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8 not found: ID does not exist" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322377 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"} err="failed to get container status \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": rpc error: code = NotFound desc = could not find container \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": container with ID starting with ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8 not found: ID does not exist"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322407 4829 scope.go:117] "RemoveContainer" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"
Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.322931 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": container with ID starting with 1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251 not found: ID does not exist" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322978 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"} err="failed to get container status \"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": rpc error: code = NotFound desc = could not find container \"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": container with ID starting with 1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251 not found: ID does not exist"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.323006 4829 scope.go:117] "RemoveContainer" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"
Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.323404 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": container with ID starting with 360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586 not found: ID does not exist" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"
Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.323440 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"} err="failed to get container status \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": rpc error: code = NotFound desc = could not find container \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": container with ID starting with 360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586 not found: ID does not exist"
Feb 17 16:42:21 crc kubenswrapper[4829]: E0217 16:42:21.283546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:42:22 crc kubenswrapper[4829]: I0217 16:42:22.424969 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:42:22 crc kubenswrapper[4829]: I0217 16:42:22.425254 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:42:27 crc kubenswrapper[4829]: E0217 16:42:27.282945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:42:33 crc kubenswrapper[4829]: E0217 16:42:33.282528 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:42:42 crc kubenswrapper[4829]: E0217 16:42:42.282037 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:42:48 crc kubenswrapper[4829]: E0217 16:42:48.282168 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:42:52 crc kubenswrapper[4829]: I0217 16:42:52.440971 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:42:52 crc kubenswrapper[4829]: I0217 16:42:52.441751 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:42:55 crc kubenswrapper[4829]: E0217 16:42:55.281339 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:42:59 crc kubenswrapper[4829]: E0217 16:42:59.284252 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:43:07 crc kubenswrapper[4829]: E0217 16:43:07.283052 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:43:11 crc kubenswrapper[4829]: E0217 16:43:11.280928 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.406942 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.407399 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.407526 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.409327 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.424798 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.424896 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.425670 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.426556 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.426705 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" gracePeriod=600
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.556537 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994285 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" exitCode=0
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994335 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"}
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994372 4829 scope.go:117] "RemoveContainer" containerID="9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"
Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.995096 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.996010 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:43:23 crc kubenswrapper[4829]: E0217 16:43:23.281489 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:43:36 crc kubenswrapper[4829]: I0217 16:43:36.281543 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:43:36 crc kubenswrapper[4829]: E0217 16:43:36.282389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.283382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412120 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412190 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412332 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.414149 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:43:48 crc kubenswrapper[4829]: E0217 16:43:48.284706 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:43:49 crc kubenswrapper[4829]: I0217 16:43:49.279237 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:43:49 crc kubenswrapper[4829]: E0217 16:43:49.279789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:43:49 crc kubenswrapper[4829]: E0217 16:43:49.281102 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:00 crc kubenswrapper[4829]: I0217 16:44:00.279723 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:00 crc kubenswrapper[4829]: E0217 16:44:00.280587 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:01 crc kubenswrapper[4829]: E0217 16:44:01.281161 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:03 crc kubenswrapper[4829]: E0217 16:44:03.282967 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:12 crc kubenswrapper[4829]: E0217 16:44:12.283819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:14 crc kubenswrapper[4829]: I0217 16:44:14.279902 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:14 crc kubenswrapper[4829]: E0217 16:44:14.281152 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:16 crc kubenswrapper[4829]: E0217 16:44:16.282519 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:25 crc kubenswrapper[4829]: E0217 16:44:25.282533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:28 crc kubenswrapper[4829]: I0217 16:44:28.289462 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:28 crc kubenswrapper[4829]: E0217 16:44:28.290351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:31 crc kubenswrapper[4829]: E0217 16:44:31.281280 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:37 crc kubenswrapper[4829]: E0217 16:44:37.283282 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:43 crc kubenswrapper[4829]: I0217 16:44:43.279603 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:43 crc kubenswrapper[4829]: E0217 16:44:43.280387 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:43 crc kubenswrapper[4829]: E0217 16:44:43.282286 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:49 crc kubenswrapper[4829]: E0217 16:44:49.282048 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:54 crc kubenswrapper[4829]: I0217 16:44:54.279765 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:54 crc kubenswrapper[4829]: E0217 16:44:54.280582 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:55 crc kubenswrapper[4829]: E0217 16:44:55.282495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.174792 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175884 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175901 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175917 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175924 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175948 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175977 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175985 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.176009 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176016 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.176033 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176041 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176309 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176359 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.177376 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.180673 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.183560 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.190313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.315403 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 
16:45:00.315591 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.315621 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418551 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: 
\"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.419560 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.424291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.435089 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.500948 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.967318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: W0217 16:45:00.980251 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ddee5a9_0539_4387_8a52_5a41ca147e35.slice/crio-8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd WatchSource:0}: Error finding container 8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd: Status 404 returned error can't find the container with id 8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.190411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerStarted","Data":"1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2"} Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.190450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerStarted","Data":"8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd"} Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.216525 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" podStartSLOduration=1.216507446 podStartE2EDuration="1.216507446s" podCreationTimestamp="2026-02-17 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
16:45:01.205958995 +0000 UTC m=+3013.622976973" watchObservedRunningTime="2026-02-17 16:45:01.216507446 +0000 UTC m=+3013.633525424" Feb 17 16:45:02 crc kubenswrapper[4829]: I0217 16:45:02.206265 4829 generic.go:334] "Generic (PLEG): container finished" podID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerID="1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2" exitCode=0 Feb 17 16:45:02 crc kubenswrapper[4829]: I0217 16:45:02.206361 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerDied","Data":"1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2"} Feb 17 16:45:03 crc kubenswrapper[4829]: E0217 16:45:03.281551 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.659806 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819707 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819802 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.820447 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.820961 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.825777 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574" (OuterVolumeSpecName: "kube-api-access-dn574") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "kube-api-access-dn574". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.830760 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.922764 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.922793 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229801 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerDied","Data":"8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd"} Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229876 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229966 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.305966 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.318147 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:45:06 crc kubenswrapper[4829]: E0217 16:45:06.281058 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:06 crc kubenswrapper[4829]: I0217 16:45:06.293993 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" path="/var/lib/kubelet/pods/5695ec4a-a69a-4e62-9ddd-c9cea43413a9/volumes" Feb 17 16:45:07 crc kubenswrapper[4829]: I0217 16:45:07.280032 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:07 crc kubenswrapper[4829]: E0217 16:45:07.280692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:17 crc kubenswrapper[4829]: E0217 16:45:17.284093 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:18 crc kubenswrapper[4829]: E0217 16:45:18.293185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:18 crc kubenswrapper[4829]: I0217 16:45:18.898683 4829 scope.go:117] "RemoveContainer" containerID="389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d" Feb 17 16:45:19 crc kubenswrapper[4829]: I0217 16:45:19.279622 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:19 crc kubenswrapper[4829]: E0217 16:45:19.280026 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:31 crc kubenswrapper[4829]: E0217 16:45:31.281762 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:31 crc kubenswrapper[4829]: E0217 16:45:31.281840 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:34 crc kubenswrapper[4829]: I0217 16:45:34.279432 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:34 crc kubenswrapper[4829]: E0217 16:45:34.280059 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:42 crc kubenswrapper[4829]: E0217 16:45:42.283695 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:45 crc kubenswrapper[4829]: I0217 16:45:45.279912 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:45 crc kubenswrapper[4829]: E0217 16:45:45.280499 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:46 crc kubenswrapper[4829]: E0217 16:45:46.281292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:53 crc kubenswrapper[4829]: E0217 16:45:53.282398 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:57 crc kubenswrapper[4829]: I0217 16:45:57.280533 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:57 crc kubenswrapper[4829]: E0217 16:45:57.281187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:01 crc kubenswrapper[4829]: E0217 16:46:01.284276 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:07 crc kubenswrapper[4829]: E0217 16:46:07.281495 4829 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:09 crc kubenswrapper[4829]: I0217 16:46:09.279602 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:09 crc kubenswrapper[4829]: E0217 16:46:09.280455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:15 crc kubenswrapper[4829]: E0217 16:46:15.282754 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:20 crc kubenswrapper[4829]: I0217 16:46:20.280138 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:20 crc kubenswrapper[4829]: E0217 16:46:20.281071 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:22 crc kubenswrapper[4829]: E0217 16:46:22.281330 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.543706 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:22 crc kubenswrapper[4829]: E0217 16:46:22.544342 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.544369 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.544715 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.549450 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.558879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678167 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678605 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678860 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.780830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.780956 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781604 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.802275 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.877050 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:23 crc kubenswrapper[4829]: I0217 16:46:23.470626 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087526 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" exitCode=0 Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b"} Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"b837a4f7d0720eda0be84215e50b60a7a3dc027a4e3757bb03a0162d743b5e59"} Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.090780 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:46:25 crc kubenswrapper[4829]: I0217 16:46:25.100630 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} Feb 17 16:46:27 crc kubenswrapper[4829]: I0217 16:46:27.122918 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" exitCode=0 Feb 17 16:46:27 crc kubenswrapper[4829]: I0217 16:46:27.122996 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} Feb 17 16:46:28 crc kubenswrapper[4829]: I0217 16:46:28.145982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} Feb 17 16:46:28 crc kubenswrapper[4829]: I0217 16:46:28.168263 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psxcg" podStartSLOduration=2.694145655 podStartE2EDuration="6.168246708s" podCreationTimestamp="2026-02-17 16:46:22 +0000 UTC" firstStartedPulling="2026-02-17 16:46:24.090311455 +0000 UTC m=+3096.507329463" lastFinishedPulling="2026-02-17 16:46:27.564412538 +0000 UTC m=+3099.981430516" observedRunningTime="2026-02-17 16:46:28.164898469 +0000 UTC m=+3100.581916447" watchObservedRunningTime="2026-02-17 16:46:28.168246708 +0000 UTC m=+3100.585264686" Feb 17 16:46:29 crc kubenswrapper[4829]: E0217 16:46:29.281687 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:31 crc kubenswrapper[4829]: I0217 16:46:31.279585 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:31 crc kubenswrapper[4829]: E0217 16:46:31.281272 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.877341 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.877729 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.931854 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:33 crc kubenswrapper[4829]: I0217 16:46:33.258409 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:33 crc kubenswrapper[4829]: I0217 16:46:33.311654 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.230483 4829 generic.go:334] "Generic (PLEG): container finished" podID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerID="567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71" exitCode=2 Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.230916 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-psxcg" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" containerID="cri-o://b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" gracePeriod=2 Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.231214 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerDied","Data":"567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71"} Feb 17 16:46:35 crc kubenswrapper[4829]: E0217 16:46:35.280874 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.852858 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.908882 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.909191 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.909340 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.910554 4829 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities" (OuterVolumeSpecName: "utilities") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.915129 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp" (OuterVolumeSpecName: "kube-api-access-q85tp") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "kube-api-access-q85tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.965213 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013463 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013523 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013545 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248550 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" exitCode=0 Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248693 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248750 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"b837a4f7d0720eda0be84215e50b60a7a3dc027a4e3757bb03a0162d743b5e59"} Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248847 4829 scope.go:117] "RemoveContainer" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.288619 4829 scope.go:117] "RemoveContainer" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.331628 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.332781 4829 scope.go:117] "RemoveContainer" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.346073 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.399146 4829 scope.go:117] "RemoveContainer" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.402168 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": container with ID starting with b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb not found: ID does not exist" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402201 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} err="failed to get container status \"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": rpc error: code = NotFound desc = could not find container \"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": container with ID starting with b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402226 4829 scope.go:117] "RemoveContainer" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.402658 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": container with ID starting with 093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f not found: ID does not exist" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402704 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} err="failed to get container status \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": rpc error: code = NotFound desc = could not find container \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": container with ID 
starting with 093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402732 4829 scope.go:117] "RemoveContainer" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.403026 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": container with ID starting with 8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b not found: ID does not exist" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.403074 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b"} err="failed to get container status \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": rpc error: code = NotFound desc = could not find container \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": container with ID starting with 8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.789883 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945250 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945321 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945379 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.951774 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb" (OuterVolumeSpecName: "kube-api-access-24dqb") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). InnerVolumeSpecName "kube-api-access-24dqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.987623 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory" (OuterVolumeSpecName: "inventory") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.015706 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049272 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049329 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049343 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.262997 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerDied","Data":"a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3"} Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.263086 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 
16:46:37.263029 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:46:38 crc kubenswrapper[4829]: I0217 16:46:38.308152 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" path="/var/lib/kubelet/pods/39b694ae-4f43-4017-a530-197ed7e3a433/volumes" Feb 17 16:46:41 crc kubenswrapper[4829]: E0217 16:46:41.282374 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:46 crc kubenswrapper[4829]: I0217 16:46:46.279928 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:46 crc kubenswrapper[4829]: E0217 16:46:46.280792 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:49 crc kubenswrapper[4829]: E0217 16:46:49.285072 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:53 crc kubenswrapper[4829]: E0217 16:46:53.282355 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:01 crc kubenswrapper[4829]: I0217 16:47:01.281238 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:01 crc kubenswrapper[4829]: E0217 16:47:01.282373 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:02 crc kubenswrapper[4829]: E0217 16:47:02.283345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:08 crc kubenswrapper[4829]: E0217 16:47:08.293944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.048423 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"] Feb 17 16:47:14 crc 
kubenswrapper[4829]: E0217 16:47:14.049230 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-utilities" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049245 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-utilities" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049281 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049289 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049327 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049334 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049352 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-content" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049360 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-content" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049666 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049688 4829 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.050676 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.056466 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.056797 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.058871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.059007 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.065720 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"] Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161344 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161404 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264271 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264553 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 
17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.270652 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.270834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.281728 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.282142 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.283262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.283463 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.371368 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.986279 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"] Feb 17 16:47:15 crc kubenswrapper[4829]: I0217 16:47:15.683633 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerStarted","Data":"98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083"} Feb 17 16:47:16 crc kubenswrapper[4829]: I0217 16:47:16.700102 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerStarted","Data":"2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946"} Feb 17 16:47:16 crc kubenswrapper[4829]: I0217 16:47:16.716392 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" podStartSLOduration=2.277938567 podStartE2EDuration="2.716376138s" podCreationTimestamp="2026-02-17 16:47:14 +0000 UTC" firstStartedPulling="2026-02-17 16:47:14.987008571 +0000 UTC m=+3147.404026569" 
lastFinishedPulling="2026-02-17 16:47:15.425446122 +0000 UTC m=+3147.842464140" observedRunningTime="2026-02-17 16:47:16.713907132 +0000 UTC m=+3149.130925110" watchObservedRunningTime="2026-02-17 16:47:16.716376138 +0000 UTC m=+3149.133394116" Feb 17 16:47:20 crc kubenswrapper[4829]: E0217 16:47:20.283946 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:26 crc kubenswrapper[4829]: I0217 16:47:26.280973 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:26 crc kubenswrapper[4829]: E0217 16:47:26.281949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:27 crc kubenswrapper[4829]: E0217 16:47:27.282757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:32 crc kubenswrapper[4829]: E0217 16:47:32.281944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:38 crc kubenswrapper[4829]: I0217 16:47:38.289156 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:38 crc kubenswrapper[4829]: E0217 16:47:38.290036 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:39 crc kubenswrapper[4829]: E0217 16:47:39.281609 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:46 crc kubenswrapper[4829]: E0217 16:47:46.281766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:52 crc kubenswrapper[4829]: I0217 16:47:52.279992 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:52 crc kubenswrapper[4829]: E0217 16:47:52.280699 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:53 crc kubenswrapper[4829]: E0217 16:47:53.281118 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:59 crc kubenswrapper[4829]: E0217 16:47:59.281522 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:48:04 crc kubenswrapper[4829]: I0217 16:48:04.279891 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:48:04 crc kubenswrapper[4829]: E0217 16:48:04.281606 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:48:07 crc kubenswrapper[4829]: E0217 16:48:07.283067 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:48:13 crc kubenswrapper[4829]: E0217 16:48:13.282840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:48:17 crc kubenswrapper[4829]: I0217 16:48:17.279772 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:48:17 crc kubenswrapper[4829]: E0217 16:48:17.280667 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:48:19 crc kubenswrapper[4829]: E0217 16:48:19.283137 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421097 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested 
in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421566 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421696 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.422934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:48:31 crc kubenswrapper[4829]: I0217 16:48:31.279736 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:48:31 crc kubenswrapper[4829]: E0217 16:48:31.283125 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:48:32 crc kubenswrapper[4829]: I0217 16:48:32.500365 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"} Feb 17 16:48:38 crc kubenswrapper[4829]: E0217 16:48:38.288782 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.414825 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.415333 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.415488 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.416658 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:48:53 crc kubenswrapper[4829]: E0217 16:48:53.286557 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:48:58 crc kubenswrapper[4829]: E0217 16:48:58.289430 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:49:07 crc kubenswrapper[4829]: E0217 16:49:07.282348 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:10 crc kubenswrapper[4829]: E0217 16:49:10.281828 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.504679 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.509079 4829 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.515010 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652726 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652776 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756201 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756757 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.757205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.757687 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.967976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:15 crc kubenswrapper[4829]: I0217 16:49:15.161734 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:15 crc kubenswrapper[4829]: I0217 16:49:15.692358 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006868 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" exitCode=0 Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006909 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899"} Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006936 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"b0f4e9aeceebcf8cb08d563b4cc1f0bd60551e4b6fabf6f07540dcc2ec4d3d42"} Feb 17 16:49:17 crc kubenswrapper[4829]: I0217 16:49:17.021711 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"} Feb 17 16:49:19 crc kubenswrapper[4829]: I0217 16:49:19.047036 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7" exitCode=0 Feb 17 16:49:19 crc kubenswrapper[4829]: I0217 16:49:19.047474 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" 
event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"} Feb 17 16:49:20 crc kubenswrapper[4829]: I0217 16:49:20.063567 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"} Feb 17 16:49:20 crc kubenswrapper[4829]: I0217 16:49:20.096401 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mg6dh" podStartSLOduration=2.660681897 podStartE2EDuration="6.0963716s" podCreationTimestamp="2026-02-17 16:49:14 +0000 UTC" firstStartedPulling="2026-02-17 16:49:16.010697622 +0000 UTC m=+3268.427715600" lastFinishedPulling="2026-02-17 16:49:19.446387325 +0000 UTC m=+3271.863405303" observedRunningTime="2026-02-17 16:49:20.082156577 +0000 UTC m=+3272.499174575" watchObservedRunningTime="2026-02-17 16:49:20.0963716 +0000 UTC m=+3272.513389598" Feb 17 16:49:22 crc kubenswrapper[4829]: E0217 16:49:22.281645 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:24 crc kubenswrapper[4829]: E0217 16:49:24.281981 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.162411 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.162515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.221955 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:26 crc kubenswrapper[4829]: I0217 16:49:26.191860 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:26 crc kubenswrapper[4829]: I0217 16:49:26.241294 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.154489 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mg6dh" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" containerID="cri-o://44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" gracePeriod=2 Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.721110 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.801713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.801966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.802020 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.803142 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities" (OuterVolumeSpecName: "utilities") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.808973 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx" (OuterVolumeSpecName: "kube-api-access-tnfbx") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "kube-api-access-tnfbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.842985 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905844 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905908 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905927 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168597 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" exitCode=0 Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168671 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"} Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.169126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"b0f4e9aeceebcf8cb08d563b4cc1f0bd60551e4b6fabf6f07540dcc2ec4d3d42"} Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.169144 4829 scope.go:117] "RemoveContainer" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.200813 4829 scope.go:117] "RemoveContainer" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.205245 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.225697 4829 scope.go:117] "RemoveContainer" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.227070 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"] Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.293844 4829 scope.go:117] "RemoveContainer" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.295892 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": container with ID starting with 44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24 not found: ID does not exist" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.295934 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"} err="failed to get container status \"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": rpc error: code = NotFound desc = could not find container \"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": container with ID starting with 44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24 not found: ID does not exist" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.295970 4829 scope.go:117] "RemoveContainer" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7" Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.296740 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": container with ID starting with bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7 not found: ID does not exist" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.296786 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"} err="failed to get container status \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": rpc error: code = NotFound desc = could not find container \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": container with ID 
starting with bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7 not found: ID does not exist" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.296815 4829 scope.go:117] "RemoveContainer" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.297181 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": container with ID starting with dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899 not found: ID does not exist" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.297227 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899"} err="failed to get container status \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": rpc error: code = NotFound desc = could not find container \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": container with ID starting with dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899 not found: ID does not exist" Feb 17 16:49:30 crc kubenswrapper[4829]: I0217 16:49:30.303864 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" path="/var/lib/kubelet/pods/3f0f0f09-269b-4977-9cf6-5c5cb72ec856/volumes" Feb 17 16:49:35 crc kubenswrapper[4829]: E0217 16:49:35.281848 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:49:35 crc kubenswrapper[4829]: E0217 16:49:35.281863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:48 crc kubenswrapper[4829]: E0217 16:49:48.292035 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:50 crc kubenswrapper[4829]: E0217 16:49:50.282682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:00 crc kubenswrapper[4829]: E0217 16:50:00.281726 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:05 crc kubenswrapper[4829]: E0217 16:50:05.282685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:15 crc kubenswrapper[4829]: E0217 16:50:15.291862 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:18 crc kubenswrapper[4829]: E0217 16:50:18.291012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:27 crc kubenswrapper[4829]: E0217 16:50:27.281702 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:29 crc kubenswrapper[4829]: E0217 16:50:29.284746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:39 crc kubenswrapper[4829]: E0217 16:50:39.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:42 crc kubenswrapper[4829]: E0217 16:50:42.282323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:52 crc kubenswrapper[4829]: I0217 16:50:52.424550 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:52 crc kubenswrapper[4829]: I0217 16:50:52.425059 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:50:54 crc kubenswrapper[4829]: E0217 16:50:54.283883 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:54 crc kubenswrapper[4829]: E0217 16:50:54.283910 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" 
podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:05 crc kubenswrapper[4829]: E0217 16:51:05.282014 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:08 crc kubenswrapper[4829]: E0217 16:51:08.295324 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:17 crc kubenswrapper[4829]: E0217 16:51:17.281934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:20 crc kubenswrapper[4829]: E0217 16:51:20.282297 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:22 crc kubenswrapper[4829]: I0217 16:51:22.424954 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 16:51:22 crc kubenswrapper[4829]: I0217 16:51:22.428823 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:31 crc kubenswrapper[4829]: E0217 16:51:31.281568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:32 crc kubenswrapper[4829]: E0217 16:51:32.281526 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:45 crc kubenswrapper[4829]: E0217 16:51:45.281757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:46 crc kubenswrapper[4829]: E0217 16:51:46.281414 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.424424 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.425122 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.425182 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.426300 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.426383 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" gracePeriod=600 Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.766522 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" 
containerID="a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" exitCode=0 Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.767045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"} Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.767076 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:51:53 crc kubenswrapper[4829]: I0217 16:51:53.778173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} Feb 17 16:51:56 crc kubenswrapper[4829]: E0217 16:51:56.282298 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.759218 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760537 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-utilities" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760557 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-utilities" Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760603 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-content" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760613 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-content" Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760631 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760962 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.763207 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.769321 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.930671 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.931010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.931080 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.032970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033251 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033703 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033791 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.055832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.088113 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.697387 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:00 crc kubenswrapper[4829]: W0217 16:52:00.719857 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60601378_20f1_4f29_a22b_0b6dfbc118a1.slice/crio-8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd WatchSource:0}: Error finding container 8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd: Status 404 returned error can't find the container with id 8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.853405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd"} Feb 17 16:52:01 crc kubenswrapper[4829]: E0217 16:52:01.280792 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.869949 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" exitCode=0 Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.870045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" 
event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e"} Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.872963 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:52:02 crc kubenswrapper[4829]: I0217 16:52:02.884435 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} Feb 17 16:52:05 crc kubenswrapper[4829]: I0217 16:52:05.917485 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" exitCode=0 Feb 17 16:52:05 crc kubenswrapper[4829]: I0217 16:52:05.917559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} Feb 17 16:52:07 crc kubenswrapper[4829]: I0217 16:52:07.939732 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} Feb 17 16:52:07 crc kubenswrapper[4829]: I0217 16:52:07.968846 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mlm9r" podStartSLOduration=3.519773888 podStartE2EDuration="8.968829693s" podCreationTimestamp="2026-02-17 16:51:59 +0000 UTC" firstStartedPulling="2026-02-17 16:52:01.872694941 +0000 UTC m=+3434.289712919" lastFinishedPulling="2026-02-17 16:52:07.321750736 +0000 UTC 
m=+3439.738768724" observedRunningTime="2026-02-17 16:52:07.962186313 +0000 UTC m=+3440.379204301" watchObservedRunningTime="2026-02-17 16:52:07.968829693 +0000 UTC m=+3440.385847671" Feb 17 16:52:09 crc kubenswrapper[4829]: E0217 16:52:09.281379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:10 crc kubenswrapper[4829]: I0217 16:52:10.088381 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:10 crc kubenswrapper[4829]: I0217 16:52:10.088670 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:11 crc kubenswrapper[4829]: I0217 16:52:11.132679 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mlm9r" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:52:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:52:11 crc kubenswrapper[4829]: > Feb 17 16:52:16 crc kubenswrapper[4829]: E0217 16:52:16.284141 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:20 crc kubenswrapper[4829]: I0217 16:52:20.143879 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:20 crc 
kubenswrapper[4829]: I0217 16:52:20.206400 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:20 crc kubenswrapper[4829]: I0217 16:52:20.384614 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:21 crc kubenswrapper[4829]: E0217 16:52:21.281688 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.094129 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mlm9r" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" containerID="cri-o://ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" gracePeriod=2 Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.609982 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741057 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741106 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.743005 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities" (OuterVolumeSpecName: "utilities") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.763289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5" (OuterVolumeSpecName: "kube-api-access-6ssv5") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "kube-api-access-6ssv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.797057 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844919 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844976 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844990 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107063 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" exitCode=0 Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107438 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107491 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd"} Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107511 4829 scope.go:117] "RemoveContainer" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107745 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.130430 4829 scope.go:117] "RemoveContainer" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.161668 4829 scope.go:117] "RemoveContainer" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.168088 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.185148 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.221765 4829 scope.go:117] "RemoveContainer" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: E0217 16:52:23.222294 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": container with ID starting with ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef not found: ID does not exist" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 
16:52:23.222348 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} err="failed to get container status \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": rpc error: code = NotFound desc = could not find container \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": container with ID starting with ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef not found: ID does not exist" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222384 4829 scope.go:117] "RemoveContainer" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: E0217 16:52:23.222820 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": container with ID starting with e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949 not found: ID does not exist" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222854 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} err="failed to get container status \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": rpc error: code = NotFound desc = could not find container \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": container with ID starting with e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949 not found: ID does not exist" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222875 4829 scope.go:117] "RemoveContainer" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc 
kubenswrapper[4829]: E0217 16:52:23.223135 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": container with ID starting with 9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e not found: ID does not exist" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.223166 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e"} err="failed to get container status \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": rpc error: code = NotFound desc = could not find container \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": container with ID starting with 9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e not found: ID does not exist" Feb 17 16:52:24 crc kubenswrapper[4829]: I0217 16:52:24.292809 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" path="/var/lib/kubelet/pods/60601378-20f1-4f29-a22b-0b6dfbc118a1/volumes" Feb 17 16:52:28 crc kubenswrapper[4829]: E0217 16:52:28.289766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:33 crc kubenswrapper[4829]: E0217 16:52:33.283793 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:41 crc kubenswrapper[4829]: E0217 16:52:41.282516 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:47 crc kubenswrapper[4829]: E0217 16:52:47.282445 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.475365 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 16:52:50.476436 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-utilities" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476449 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-utilities" Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 16:52:50.476471 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476478 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 
16:52:50.476501 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-content" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476509 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-content" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476780 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.478710 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.492226 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639315 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.742955 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743626 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743499 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.744079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.765263 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.798924 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:51 crc kubenswrapper[4829]: I0217 16:52:51.381196 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:51 crc kubenswrapper[4829]: I0217 16:52:51.402493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"c60a853849686ab53590661fb47e340fcb448a03febd0f524b02caaf02879b53"} Feb 17 16:52:52 crc kubenswrapper[4829]: I0217 16:52:52.422385 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" exitCode=0 Feb 17 16:52:52 crc kubenswrapper[4829]: I0217 16:52:52.422569 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9"} Feb 17 16:52:53 crc kubenswrapper[4829]: I0217 16:52:53.434553 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} Feb 17 16:52:56 crc kubenswrapper[4829]: E0217 16:52:56.296422 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:58 crc kubenswrapper[4829]: I0217 16:52:58.484423 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" exitCode=0 Feb 17 16:52:58 crc kubenswrapper[4829]: I0217 16:52:58.484480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} Feb 17 16:52:59 crc kubenswrapper[4829]: I0217 16:52:59.496563 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} Feb 17 16:52:59 crc kubenswrapper[4829]: I0217 16:52:59.516785 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9mgp" podStartSLOduration=3.001427853 podStartE2EDuration="9.516769054s" podCreationTimestamp="2026-02-17 16:52:50 +0000 UTC" firstStartedPulling="2026-02-17 16:52:52.428647578 +0000 UTC m=+3484.845665546" lastFinishedPulling="2026-02-17 16:52:58.943988769 +0000 UTC m=+3491.361006747" 
observedRunningTime="2026-02-17 16:52:59.514643086 +0000 UTC m=+3491.931661084" watchObservedRunningTime="2026-02-17 16:52:59.516769054 +0000 UTC m=+3491.933787032" Feb 17 16:53:00 crc kubenswrapper[4829]: E0217 16:53:00.281385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:00 crc kubenswrapper[4829]: I0217 16:53:00.800339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:00 crc kubenswrapper[4829]: I0217 16:53:00.803078 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:01 crc kubenswrapper[4829]: I0217 16:53:01.850951 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" probeResult="failure" output=< Feb 17 16:53:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:53:01 crc kubenswrapper[4829]: > Feb 17 16:53:09 crc kubenswrapper[4829]: E0217 16:53:09.285897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:11 crc kubenswrapper[4829]: I0217 16:53:11.845792 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" 
containerName="registry-server" probeResult="failure" output=< Feb 17 16:53:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:53:11 crc kubenswrapper[4829]: > Feb 17 16:53:12 crc kubenswrapper[4829]: E0217 16:53:12.282870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:20 crc kubenswrapper[4829]: E0217 16:53:20.281774 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:20 crc kubenswrapper[4829]: I0217 16:53:20.852410 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:20 crc kubenswrapper[4829]: I0217 16:53:20.924327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:21 crc kubenswrapper[4829]: I0217 16:53:21.671852 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:22 crc kubenswrapper[4829]: I0217 16:53:22.746383 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" containerID="cri-o://8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" gracePeriod=2 Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.255957 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308244 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308508 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.315704 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities" (OuterVolumeSpecName: "utilities") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.329416 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd" (OuterVolumeSpecName: "kube-api-access-8zjqd") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). 
InnerVolumeSpecName "kube-api-access-8zjqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.411814 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.411847 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.454442 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.513396 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760345 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" exitCode=0 Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760404 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"c60a853849686ab53590661fb47e340fcb448a03febd0f524b02caaf02879b53"} Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760478 4829 scope.go:117] "RemoveContainer" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760517 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.784940 4829 scope.go:117] "RemoveContainer" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.825077 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.836100 4829 scope.go:117] "RemoveContainer" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.841836 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.880648 4829 scope.go:117] "RemoveContainer" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.881259 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": container with ID starting with 8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6 not found: ID does not exist" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881292 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} err="failed to get container status \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": rpc error: code = NotFound desc = could not find container \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": container with ID starting with 8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6 not found: ID does 
not exist" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881316 4829 scope.go:117] "RemoveContainer" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.881714 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": container with ID starting with b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d not found: ID does not exist" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881742 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} err="failed to get container status \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": rpc error: code = NotFound desc = could not find container \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": container with ID starting with b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d not found: ID does not exist" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881762 4829 scope.go:117] "RemoveContainer" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.882087 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": container with ID starting with 3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9 not found: ID does not exist" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.882113 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9"} err="failed to get container status \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": rpc error: code = NotFound desc = could not find container \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": container with ID starting with 3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9 not found: ID does not exist" Feb 17 16:53:24 crc kubenswrapper[4829]: E0217 16:53:24.281177 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.302317 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" path="/var/lib/kubelet/pods/999f5a65-e45a-4014-a208-9bfe09f453b3/volumes" Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.773992 4829 generic.go:334] "Generic (PLEG): container finished" podID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerID="2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946" exitCode=2 Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.774104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerDied","Data":"2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946"} Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.259106 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.403793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.404725 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.404953 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.412801 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq" (OuterVolumeSpecName: "kube-api-access-kshsq") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "kube-api-access-kshsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.435912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.436249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory" (OuterVolumeSpecName: "inventory") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509539 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509597 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509610 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" 
event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerDied","Data":"98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083"} Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796851 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796592 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.294132 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418360 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418452 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418747 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.420059 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:46 crc kubenswrapper[4829]: E0217 16:53:46.284492 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.421320 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.421973 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.422175 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.423444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:52 crc kubenswrapper[4829]: I0217 16:53:52.424787 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:53:52 crc kubenswrapper[4829]: I0217 16:53:52.425410 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:53:57 crc kubenswrapper[4829]: E0217 16:53:57.285321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:03 crc kubenswrapper[4829]: E0217 16:54:03.281546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:10 crc kubenswrapper[4829]: E0217 16:54:10.283444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:15 crc kubenswrapper[4829]: E0217 16:54:15.281997 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:22 crc kubenswrapper[4829]: I0217 16:54:22.424878 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:22 crc kubenswrapper[4829]: I0217 16:54:22.425417 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:24 crc kubenswrapper[4829]: E0217 16:54:24.283261 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:27 crc kubenswrapper[4829]: E0217 16:54:27.282406 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:37 crc kubenswrapper[4829]: E0217 16:54:37.281165 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:38 crc kubenswrapper[4829]: E0217 16:54:38.289625 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.036772 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037852 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037872 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037916 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-content" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037923 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-content" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037936 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-utilities" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037943 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-utilities" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037974 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037979 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.038181 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.038198 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.039074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042071 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042135 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042179 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.043393 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.055126 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.182781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.182921 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc 
kubenswrapper[4829]: I0217 16:54:44.183020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.286534 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.287066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.287330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.295911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.300139 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.310913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.363691 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.973566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:45 crc kubenswrapper[4829]: I0217 16:54:45.700835 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerStarted","Data":"d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560"} Feb 17 16:54:46 crc kubenswrapper[4829]: I0217 16:54:46.712160 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerStarted","Data":"f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187"} Feb 17 16:54:49 crc kubenswrapper[4829]: E0217 16:54:49.288664 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.281459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.424738 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.425076 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.425125 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.426152 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.426243 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" gracePeriod=600 Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.583721 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772122 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" exitCode=0 Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772234 4829 scope.go:117] "RemoveContainer" containerID="a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.773369 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.773903 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.799463 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" podStartSLOduration=8.197973639 podStartE2EDuration="8.799442291s" podCreationTimestamp="2026-02-17 16:54:44 +0000 UTC" firstStartedPulling="2026-02-17 16:54:44.987612334 +0000 UTC m=+3597.404630312" lastFinishedPulling="2026-02-17 
16:54:45.589080986 +0000 UTC m=+3598.006098964" observedRunningTime="2026-02-17 16:54:46.751679347 +0000 UTC m=+3599.168697335" watchObservedRunningTime="2026-02-17 16:54:52.799442291 +0000 UTC m=+3605.216460269" Feb 17 16:55:04 crc kubenswrapper[4829]: E0217 16:55:04.281520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:05 crc kubenswrapper[4829]: I0217 16:55:05.280318 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:05 crc kubenswrapper[4829]: E0217 16:55:05.280922 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:05 crc kubenswrapper[4829]: E0217 16:55:05.283381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:15 crc kubenswrapper[4829]: E0217 16:55:15.281612 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:17 crc kubenswrapper[4829]: E0217 16:55:17.283308 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:18 crc kubenswrapper[4829]: I0217 16:55:18.291483 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:18 crc kubenswrapper[4829]: E0217 16:55:18.292000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:26 crc kubenswrapper[4829]: E0217 16:55:26.283293 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:32 crc kubenswrapper[4829]: E0217 16:55:32.281859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:33 crc kubenswrapper[4829]: I0217 16:55:33.280223 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:33 crc kubenswrapper[4829]: E0217 16:55:33.280866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:39 crc kubenswrapper[4829]: E0217 16:55:39.281850 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:45 crc kubenswrapper[4829]: I0217 16:55:45.280063 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:45 crc kubenswrapper[4829]: E0217 16:55:45.281222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:45 crc kubenswrapper[4829]: E0217 16:55:45.281613 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:54 crc kubenswrapper[4829]: E0217 16:55:54.281526 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:57 crc kubenswrapper[4829]: I0217 16:55:57.280068 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:57 crc kubenswrapper[4829]: E0217 16:55:57.280987 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:57 crc kubenswrapper[4829]: E0217 16:55:57.283759 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.289427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.289520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:08 crc kubenswrapper[4829]: I0217 16:56:08.288667 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.291710 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:20 crc kubenswrapper[4829]: E0217 16:56:20.281524 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:21 crc kubenswrapper[4829]: E0217 16:56:21.281884 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:23 crc kubenswrapper[4829]: I0217 16:56:23.279656 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:23 crc kubenswrapper[4829]: E0217 16:56:23.280285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:34 crc kubenswrapper[4829]: I0217 16:56:34.280293 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:34 crc kubenswrapper[4829]: E0217 16:56:34.281170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:35 crc kubenswrapper[4829]: E0217 16:56:35.282328 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:36 crc kubenswrapper[4829]: E0217 16:56:36.281909 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:47 crc kubenswrapper[4829]: I0217 16:56:47.280119 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:47 crc kubenswrapper[4829]: E0217 16:56:47.281141 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:47 crc kubenswrapper[4829]: E0217 16:56:47.283072 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:51 crc kubenswrapper[4829]: E0217 16:56:51.282681 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:58 crc kubenswrapper[4829]: I0217 16:56:58.282769 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:58 crc kubenswrapper[4829]: E0217 16:56:58.284175 4829 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:00 crc kubenswrapper[4829]: E0217 16:57:00.281719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:03 crc kubenswrapper[4829]: E0217 16:57:03.281918 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:11 crc kubenswrapper[4829]: E0217 16:57:11.282381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:13 crc kubenswrapper[4829]: I0217 16:57:13.279680 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:13 crc kubenswrapper[4829]: E0217 16:57:13.280263 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:15 crc kubenswrapper[4829]: E0217 16:57:15.283978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:23 crc kubenswrapper[4829]: E0217 16:57:23.281432 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:27 crc kubenswrapper[4829]: I0217 16:57:27.280524 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:27 crc kubenswrapper[4829]: E0217 16:57:27.281424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:29 crc kubenswrapper[4829]: E0217 16:57:29.294416 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:38 crc kubenswrapper[4829]: E0217 16:57:38.291040 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:40 crc kubenswrapper[4829]: I0217 16:57:40.280309 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:40 crc kubenswrapper[4829]: E0217 16:57:40.280897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:40 crc kubenswrapper[4829]: E0217 16:57:40.283130 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:50 crc kubenswrapper[4829]: E0217 16:57:50.285374 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:54 crc kubenswrapper[4829]: E0217 16:57:54.281859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:55 crc kubenswrapper[4829]: I0217 16:57:55.280199 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:55 crc kubenswrapper[4829]: E0217 16:57:55.280608 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:05 crc kubenswrapper[4829]: E0217 16:58:05.282450 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:07 crc kubenswrapper[4829]: I0217 16:58:07.279874 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:07 crc kubenswrapper[4829]: E0217 16:58:07.280763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:09 crc kubenswrapper[4829]: E0217 16:58:09.287019 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:19 crc kubenswrapper[4829]: E0217 16:58:19.282154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:22 crc kubenswrapper[4829]: I0217 16:58:22.280441 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:22 crc kubenswrapper[4829]: E0217 16:58:22.281295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:23 crc kubenswrapper[4829]: E0217 16:58:23.281506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:34 crc kubenswrapper[4829]: I0217 16:58:34.281146 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.282079 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.283038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.283042 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:45 crc kubenswrapper[4829]: I0217 16:58:45.283559 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406004 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406077 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406222 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.407444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:47 crc kubenswrapper[4829]: E0217 16:58:47.280932 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:49 crc kubenswrapper[4829]: I0217 16:58:49.279410 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:49 crc kubenswrapper[4829]: E0217 16:58:49.280351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:00 crc 
kubenswrapper[4829]: E0217 16:59:00.281629 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:01 crc kubenswrapper[4829]: I0217 16:59:01.279867 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.280450 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.403765 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.403860 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.404030 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.405829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:13 crc kubenswrapper[4829]: I0217 16:59:13.279969 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:13 crc kubenswrapper[4829]: E0217 16:59:13.280889 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:15 crc kubenswrapper[4829]: E0217 16:59:15.282770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:17 crc kubenswrapper[4829]: E0217 16:59:17.285806 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:25 crc kubenswrapper[4829]: I0217 16:59:25.280506 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:25 crc kubenswrapper[4829]: E0217 16:59:25.283554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:27 crc kubenswrapper[4829]: E0217 16:59:27.282505 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:31 crc kubenswrapper[4829]: E0217 16:59:31.283380 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:40 crc kubenswrapper[4829]: I0217 16:59:40.280056 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:40 crc kubenswrapper[4829]: E0217 16:59:40.281119 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:42 crc kubenswrapper[4829]: E0217 16:59:42.282334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:42 crc kubenswrapper[4829]: E0217 16:59:42.282340 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:51 crc kubenswrapper[4829]: I0217 16:59:51.280256 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:51 crc kubenswrapper[4829]: E0217 16:59:51.281095 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:56 crc kubenswrapper[4829]: E0217 16:59:56.281466 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:56 crc kubenswrapper[4829]: E0217 16:59:56.281702 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.173687 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.188668 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.192413 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.211704 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.212358 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.275887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.276322 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.276429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378606 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378727 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378960 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqt65\" (UniqueName: 
\"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.380001 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.385209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.396437 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.544454 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:01 crc kubenswrapper[4829]: I0217 17:00:01.721965 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.444822 4829 generic.go:334] "Generic (PLEG): container finished" podID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerID="f4cc6704b8d4cbb9f1474dc2f06edf348ff52dc93162fe645a65a1daf1e5eefe" exitCode=0 Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.444954 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerDied","Data":"f4cc6704b8d4cbb9f1474dc2f06edf348ff52dc93162fe645a65a1daf1e5eefe"} Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.445484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerStarted","Data":"13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef"} Feb 17 17:00:03 crc kubenswrapper[4829]: I0217 17:00:03.279539 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.018994 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069324 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069554 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069788 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.077239 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.078524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.100267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65" (OuterVolumeSpecName: "kube-api-access-kqt65") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "kube-api-access-kqt65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172540 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172607 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172623 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerDied","Data":"13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef"} Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470751 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470480 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.476345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} Feb 17 17:00:05 crc kubenswrapper[4829]: I0217 17:00:05.121490 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 17:00:05 crc kubenswrapper[4829]: I0217 17:00:05.133547 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 17:00:06 crc kubenswrapper[4829]: I0217 17:00:06.294600 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" path="/var/lib/kubelet/pods/b88fd8a6-9c2a-4529-81eb-5495aa3237c8/volumes" Feb 17 17:00:09 crc kubenswrapper[4829]: E0217 17:00:09.285000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:10 crc kubenswrapper[4829]: E0217 17:00:10.282563 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:19 crc kubenswrapper[4829]: I0217 17:00:19.331423 4829 scope.go:117] "RemoveContainer" 
containerID="595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c" Feb 17 17:00:21 crc kubenswrapper[4829]: E0217 17:00:21.281816 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:23 crc kubenswrapper[4829]: E0217 17:00:23.281709 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:35 crc kubenswrapper[4829]: E0217 17:00:35.282218 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:36 crc kubenswrapper[4829]: E0217 17:00:36.282950 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:48 crc kubenswrapper[4829]: E0217 17:00:48.290933 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:48 crc kubenswrapper[4829]: E0217 17:00:48.290970 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:59 crc kubenswrapper[4829]: E0217 17:00:59.281739 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.040729 4829 generic.go:334] "Generic (PLEG): container finished" podID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerID="f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187" exitCode=2 Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.040862 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerDied","Data":"f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187"} Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.152598 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:00 crc kubenswrapper[4829]: E0217 17:01:00.153164 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.153183 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.153458 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.154476 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.166801 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167620 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167748 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167774 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270174 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270211 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.277471 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.283731 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.283838 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.288556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.488153 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.984129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:01 crc kubenswrapper[4829]: I0217 17:01:01.054746 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerStarted","Data":"25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.046194 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerDied","Data":"d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092442 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092612 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.099684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerStarted","Data":"5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.191794 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-jp96w" podStartSLOduration=2.191773867 podStartE2EDuration="2.191773867s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:02.125297777 +0000 UTC m=+3974.542315775" watchObservedRunningTime="2026-02-17 17:01:02.191773867 +0000 UTC m=+3974.608791845" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.228925 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.232746 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.232857 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod 
\"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.246341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c" (OuterVolumeSpecName: "kube-api-access-wln6c") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "kube-api-access-wln6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.275205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: E0217 17:01:02.281877 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.335906 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.335949 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.367611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory" (OuterVolumeSpecName: "inventory") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.437162 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:06 crc kubenswrapper[4829]: I0217 17:01:06.151275 4829 generic.go:334] "Generic (PLEG): container finished" podID="7522621b-701f-4bef-8232-25fb5b8abab1" containerID="5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227" exitCode=0 Feb 17 17:01:06 crc kubenswrapper[4829]: I0217 17:01:06.151329 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerDied","Data":"5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227"} Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.747157 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876465 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876817 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.881967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx" (OuterVolumeSpecName: "kube-api-access-fmxhx") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "kube-api-access-fmxhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.882417 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.913932 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.957688 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data" (OuterVolumeSpecName: "config-data") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980125 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980169 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980187 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980201 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerDied","Data":"25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48"} Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178112 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48" Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178126 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:13 crc kubenswrapper[4829]: E0217 17:01:13.281446 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:14 crc kubenswrapper[4829]: E0217 17:01:14.281349 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:25 crc kubenswrapper[4829]: E0217 17:01:25.281563 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:28 crc kubenswrapper[4829]: E0217 17:01:28.301984 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:36 crc kubenswrapper[4829]: E0217 17:01:36.281721 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:40 crc kubenswrapper[4829]: E0217 17:01:40.282270 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:50 crc kubenswrapper[4829]: E0217 17:01:50.282953 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:52 crc kubenswrapper[4829]: E0217 17:01:52.281729 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:04 crc kubenswrapper[4829]: E0217 17:02:04.281456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:06 crc kubenswrapper[4829]: E0217 17:02:06.281284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:18 crc kubenswrapper[4829]: E0217 17:02:18.289841 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:21 crc kubenswrapper[4829]: E0217 17:02:21.282771 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:22 crc kubenswrapper[4829]: I0217 17:02:22.424183 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:22 crc kubenswrapper[4829]: I0217 17:02:22.424526 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:30 crc kubenswrapper[4829]: E0217 17:02:30.281867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:32 crc kubenswrapper[4829]: E0217 17:02:32.281827 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:41 crc kubenswrapper[4829]: E0217 17:02:41.282289 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.718400 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:46 crc kubenswrapper[4829]: E0217 17:02:46.719649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719670 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: E0217 17:02:46.719695 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719703 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" 
containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719998 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.720028 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.722143 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.733443 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871196 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871380 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: 
\"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.922519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.925405 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.953023 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974079 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974154 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974416 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974955 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.975079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.998641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.052932 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076272 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.178770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179131 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" 
(UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.208133 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.243839 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: E0217 17:02:47.293653 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.684517 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.877387 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:48 crc kubenswrapper[4829]: E0217 17:02:48.105904 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8d01ff_56bf_4c0c_b23a_f1d39897a1e1.slice/crio-c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:02:48 crc kubenswrapper[4829]: E0217 17:02:48.119162 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8d01ff_56bf_4c0c_b23a_f1d39897a1e1.slice/crio-c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.569735 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" exitCode=0 Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.570087 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.570112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"77f0002caeed3f047c6b9dac29f1d93c8de39b8b4df63faa2366affd8529c82d"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574357 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b" exitCode=0 Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574395 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574418 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"a1fc745f1370e4a89f0f709e3665185a42b6cc92ee32738d4cc7001b5ecbd3de"} Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.588702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7"} Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.921135 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 
17:02:49.925693 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.932786 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073256 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073333 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073404 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.175490 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 
17:02:50.175557 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.175617 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.176086 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.176129 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.199312 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.253911 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.603681 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.817945 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:50 crc kubenswrapper[4829]: W0217 17:02:50.824834 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4f1019_63ed_4b36_93b0_5cb66837ec84.slice/crio-bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569 WatchSource:0}: Error finding container bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569: Status 404 returned error can't find the container with id bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569 Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.620634 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" exitCode=0 Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.620819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810"} Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.621232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" 
event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569"} Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.425160 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.425214 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.635513 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" exitCode=0 Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.635604 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} Feb 17 17:02:53 crc kubenswrapper[4829]: I0217 17:02:53.656294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} Feb 17 17:02:54 crc kubenswrapper[4829]: E0217 17:02:54.280497 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.667310 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7" exitCode=0 Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.667367 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7"} Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.671918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.711027 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8ngc2" podStartSLOduration=4.239719105 podStartE2EDuration="8.711000223s" podCreationTimestamp="2026-02-17 17:02:46 +0000 UTC" firstStartedPulling="2026-02-17 17:02:48.572266701 +0000 UTC m=+4080.989284679" lastFinishedPulling="2026-02-17 17:02:53.043547819 +0000 UTC m=+4085.460565797" observedRunningTime="2026-02-17 17:02:54.70388226 +0000 UTC m=+4087.120900258" watchObservedRunningTime="2026-02-17 17:02:54.711000223 +0000 UTC m=+4087.128018221" Feb 17 17:02:55 crc kubenswrapper[4829]: I0217 17:02:55.683857 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" exitCode=0 Feb 17 
17:02:55 crc kubenswrapper[4829]: I0217 17:02:55.683905 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.699328 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.703387 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.730085 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdlcf" podStartSLOduration=4.131058829 podStartE2EDuration="10.730065865s" podCreationTimestamp="2026-02-17 17:02:46 +0000 UTC" firstStartedPulling="2026-02-17 17:02:48.576226618 +0000 UTC m=+4080.993244596" lastFinishedPulling="2026-02-17 17:02:55.175233654 +0000 UTC m=+4087.592251632" observedRunningTime="2026-02-17 17:02:56.725893893 +0000 UTC m=+4089.142911871" watchObservedRunningTime="2026-02-17 17:02:56.730065865 +0000 UTC m=+4089.147083843" Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.750788 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ppp9d" podStartSLOduration=3.210497345 podStartE2EDuration="7.750771812s" podCreationTimestamp="2026-02-17 17:02:49 +0000 UTC" firstStartedPulling="2026-02-17 17:02:51.623468657 +0000 UTC m=+4084.040486635" 
lastFinishedPulling="2026-02-17 17:02:56.163743124 +0000 UTC m=+4088.580761102" observedRunningTime="2026-02-17 17:02:56.746538708 +0000 UTC m=+4089.163556686" watchObservedRunningTime="2026-02-17 17:02:56.750771812 +0000 UTC m=+4089.167789790" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.053833 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.053893 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.245520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.245904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:58 crc kubenswrapper[4829]: I0217 17:02:58.106873 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:02:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:02:58 crc kubenswrapper[4829]: > Feb 17 17:02:58 crc kubenswrapper[4829]: I0217 17:02:58.304458 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8ngc2" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" probeResult="failure" output=< Feb 17 17:02:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:02:58 crc kubenswrapper[4829]: > Feb 17 17:03:00 crc kubenswrapper[4829]: I0217 17:03:00.254443 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:00 crc kubenswrapper[4829]: I0217 17:03:00.255774 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:01 crc kubenswrapper[4829]: E0217 17:03:01.281127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:01 crc kubenswrapper[4829]: I0217 17:03:01.306898 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:01 crc kubenswrapper[4829]: > Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.534519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.537797 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.561711 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727128 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727343 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727403 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829221 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829377 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.830248 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.830342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.850702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.919498 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:03 crc kubenswrapper[4829]: I0217 17:03:03.512349 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:03 crc kubenswrapper[4829]: I0217 17:03:03.769982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"b59ad539cf0ce290be53944b90ddaf1e58595f42c17f1d94728410f8fddfbe67"} Feb 17 17:03:04 crc kubenswrapper[4829]: I0217 17:03:04.782239 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" exitCode=0 Feb 17 17:03:04 crc kubenswrapper[4829]: I0217 17:03:04.782340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc"} Feb 17 17:03:05 crc kubenswrapper[4829]: I0217 17:03:05.795191 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} Feb 17 17:03:06 crc kubenswrapper[4829]: E0217 17:03:06.283600 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:07 crc kubenswrapper[4829]: I0217 17:03:07.331514 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:07 crc kubenswrapper[4829]: I0217 17:03:07.379708 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.322991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.328609 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:08 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:08 crc kubenswrapper[4829]: > Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.823471 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8ngc2" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" containerID="cri-o://ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" gracePeriod=2 Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.756201 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.834313 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.835189 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.835361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.836106 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities" (OuterVolumeSpecName: "utilities") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.836954 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845097 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" exitCode=0 Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845227 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"77f0002caeed3f047c6b9dac29f1d93c8de39b8b4df63faa2366affd8529c82d"} Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845251 4829 scope.go:117] "RemoveContainer" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845455 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.856658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh" (OuterVolumeSpecName: "kube-api-access-pmlnh") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "kube-api-access-pmlnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.871024 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.936123 4829 scope.go:117] "RemoveContainer" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.939650 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.939694 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.963965 4829 scope.go:117] "RemoveContainer" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051038 4829 scope.go:117] "RemoveContainer" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.051606 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": container with ID starting with ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d not found: ID does not exist" 
containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051669 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} err="failed to get container status \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": rpc error: code = NotFound desc = could not find container \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": container with ID starting with ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051706 4829 scope.go:117] "RemoveContainer" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.052408 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": container with ID starting with c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4 not found: ID does not exist" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052441 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} err="failed to get container status \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": rpc error: code = NotFound desc = could not find container \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": container with ID starting with c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4 not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052479 4829 scope.go:117] 
"RemoveContainer" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.052776 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": container with ID starting with c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b not found: ID does not exist" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052830 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b"} err="failed to get container status \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": rpc error: code = NotFound desc = could not find container \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": container with ID starting with c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.192550 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.207320 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.296096 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" path="/var/lib/kubelet/pods/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1/volumes" Feb 17 17:03:11 crc kubenswrapper[4829]: I0217 17:03:11.315087 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" 
probeResult="failure" output=< Feb 17 17:03:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:11 crc kubenswrapper[4829]: > Feb 17 17:03:12 crc kubenswrapper[4829]: E0217 17:03:12.282713 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:14 crc kubenswrapper[4829]: I0217 17:03:14.900139 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" exitCode=0 Feb 17 17:03:14 crc kubenswrapper[4829]: I0217 17:03:14.900198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} Feb 17 17:03:16 crc kubenswrapper[4829]: I0217 17:03:16.923286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} Feb 17 17:03:16 crc kubenswrapper[4829]: I0217 17:03:16.942433 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x86t" podStartSLOduration=4.391819029 podStartE2EDuration="14.94241325s" podCreationTimestamp="2026-02-17 17:03:02 +0000 UTC" firstStartedPulling="2026-02-17 17:03:04.78766408 +0000 UTC m=+4097.204682058" lastFinishedPulling="2026-02-17 17:03:15.338258301 +0000 UTC m=+4107.755276279" observedRunningTime="2026-02-17 17:03:16.939340656 +0000 UTC 
m=+4109.356358644" watchObservedRunningTime="2026-02-17 17:03:16.94241325 +0000 UTC m=+4109.359431228" Feb 17 17:03:17 crc kubenswrapper[4829]: E0217 17:03:17.282608 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:18 crc kubenswrapper[4829]: I0217 17:03:18.105334 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:18 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:18 crc kubenswrapper[4829]: > Feb 17 17:03:20 crc kubenswrapper[4829]: I0217 17:03:20.308249 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:20 crc kubenswrapper[4829]: I0217 17:03:20.359342 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:21 crc kubenswrapper[4829]: I0217 17:03:21.125240 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.020257 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" containerID="cri-o://55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" gracePeriod=2 Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424729 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424778 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424819 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.425648 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.425701 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594" gracePeriod=600 Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.698534 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.791919 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792018 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792133 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792692 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities" (OuterVolumeSpecName: "utilities") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.793015 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.805042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm" (OuterVolumeSpecName: "kube-api-access-cz8lm") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "kube-api-access-cz8lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.861181 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.895740 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.895783 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.921351 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.921407 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.973022 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032324 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594" exitCode=0 Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032453 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032472 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036093 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" exitCode=0 Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036774 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.037110 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.086355 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.097535 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.100205 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:23 crc 
kubenswrapper[4829]: I0217 17:03:23.127492 4829 scope.go:117] "RemoveContainer" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.144842 4829 scope.go:117] "RemoveContainer" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.171763 4829 scope.go:117] "RemoveContainer" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.236913 4829 scope.go:117] "RemoveContainer" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.237875 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": container with ID starting with 55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580 not found: ID does not exist" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.237961 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} err="failed to get container status \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": rpc error: code = NotFound desc = could not find container \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": container with ID starting with 55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580 not found: ID does not exist" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238001 4829 scope.go:117] "RemoveContainer" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.238493 
4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": container with ID starting with fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1 not found: ID does not exist" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238565 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} err="failed to get container status \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": rpc error: code = NotFound desc = could not find container \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": container with ID starting with fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1 not found: ID does not exist" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238639 4829 scope.go:117] "RemoveContainer" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.239186 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": container with ID starting with 38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810 not found: ID does not exist" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.239224 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810"} err="failed to get container status \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": rpc error: code = 
NotFound desc = could not find container \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": container with ID starting with 38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810 not found: ID does not exist" Feb 17 17:03:24 crc kubenswrapper[4829]: E0217 17:03:24.283945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:24 crc kubenswrapper[4829]: I0217 17:03:24.295240 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" path="/var/lib/kubelet/pods/2b4f1019-63ed-4b36-93b0-5cb66837ec84/volumes" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.324779 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.326125 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x86t" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" containerID="cri-o://1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" gracePeriod=2 Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.926889 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971467 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971782 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.972610 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities" (OuterVolumeSpecName: "utilities") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.980177 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7" (OuterVolumeSpecName: "kube-api-access-6vgw7") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "kube-api-access-6vgw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084771 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085061 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084895 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085265 4829 scope.go:117] "RemoveContainer" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084853 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" exitCode=0 Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"b59ad539cf0ce290be53944b90ddaf1e58595f42c17f1d94728410f8fddfbe67"} Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085016 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.106980 4829 scope.go:117] "RemoveContainer" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.111630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.130658 4829 scope.go:117] "RemoveContainer" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.188050 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.227805 4829 scope.go:117] "RemoveContainer" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.228743 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": container with ID starting with 1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad not found: ID does not exist" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.228807 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} err="failed to get container status \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": rpc error: code = NotFound desc = could not find container \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": container with ID starting with 1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.228838 4829 scope.go:117] "RemoveContainer" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.230621 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": container with ID starting with 5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723 not found: ID does not exist" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.230649 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} err="failed to get container status \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": rpc error: code = NotFound desc = could not find container \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": container with ID starting with 5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723 not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.230666 4829 scope.go:117] "RemoveContainer" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.231090 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": container with ID starting with 345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc not found: ID does not exist" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.231124 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc"} err="failed to get container status \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": rpc error: code = NotFound desc = could not find container \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": container with ID starting with 345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.419477 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.430535 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:27 crc kubenswrapper[4829]: I0217 17:03:27.890345 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:27 crc kubenswrapper[4829]: I0217 17:03:27.991537 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:28 crc kubenswrapper[4829]: E0217 17:03:28.289476 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:28 crc kubenswrapper[4829]: I0217 17:03:28.291407 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" path="/var/lib/kubelet/pods/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8/volumes" Feb 17 17:03:28 crc kubenswrapper[4829]: I0217 17:03:28.725395 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:29 crc kubenswrapper[4829]: I0217 17:03:29.117001 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" containerID="cri-o://5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" gracePeriod=2 Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.128458 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" exitCode=0 Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.128538 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842"} Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.659899 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.838706 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839728 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities" (OuterVolumeSpecName: "utilities") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.841269 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.871423 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn" (OuterVolumeSpecName: "kube-api-access-brsxn") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "kube-api-access-brsxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.922864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.943897 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.944007 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146459 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"a1fc745f1370e4a89f0f709e3665185a42b6cc92ee32738d4cc7001b5ecbd3de"} Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146548 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146600 4829 scope.go:117] "RemoveContainer" containerID="5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.182475 4829 scope.go:117] "RemoveContainer" containerID="439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.182677 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.192858 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.206136 4829 scope.go:117] "RemoveContainer" containerID="bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b" Feb 17 17:03:32 crc kubenswrapper[4829]: I0217 17:03:32.291418 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece55ca0-c061-44d8-abde-b99f48421919" path="/var/lib/kubelet/pods/ece55ca0-c061-44d8-abde-b99f48421919/volumes" Feb 17 17:03:35 crc kubenswrapper[4829]: E0217 17:03:35.282232 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.033811 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.034936 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" 
containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.034953 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.034976 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.034983 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035009 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035017 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035029 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035038 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035053 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035061 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035086 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" 
containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035095 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035112 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035119 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035128 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035136 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035147 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035155 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035178 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035184 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035196 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" 
containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035203 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035213 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035219 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035486 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035509 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035527 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.036518 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042057 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042347 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042965 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.043120 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.047096 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.155651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.156303 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 
17:03:39.156864 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259044 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259127 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.347338 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.348487 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.358216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.364706 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:40 crc kubenswrapper[4829]: E0217 17:03:40.282274 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:40 crc kubenswrapper[4829]: I0217 17:03:40.316794 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:41 crc kubenswrapper[4829]: I0217 17:03:41.267203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerStarted","Data":"c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2"} Feb 17 17:03:42 crc kubenswrapper[4829]: I0217 17:03:42.295238 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerStarted","Data":"564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393"} Feb 17 17:03:42 crc kubenswrapper[4829]: I0217 17:03:42.317779 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" podStartSLOduration=1.892221039 podStartE2EDuration="3.317754919s" podCreationTimestamp="2026-02-17 17:03:39 +0000 UTC" firstStartedPulling="2026-02-17 17:03:40.330297637 +0000 UTC m=+4132.747315615" lastFinishedPulling="2026-02-17 17:03:41.755831517 +0000 UTC m=+4134.172849495" observedRunningTime="2026-02-17 17:03:42.302755765 +0000 UTC m=+4134.719773743" watchObservedRunningTime="2026-02-17 17:03:42.317754919 
+0000 UTC m=+4134.734772907" Feb 17 17:03:46 crc kubenswrapper[4829]: E0217 17:03:46.283009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:51 crc kubenswrapper[4829]: I0217 17:03:51.283106 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433049 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433122 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433287 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.434500 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:58 crc kubenswrapper[4829]: E0217 17:03:58.297222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:04:06 crc kubenswrapper[4829]: E0217 17:04:06.282295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.382098 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.382949 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.383156 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.384779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:04:21 crc kubenswrapper[4829]: E0217 17:04:21.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:04:25 crc kubenswrapper[4829]: E0217 17:04:25.282216 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:04:34 crc kubenswrapper[4829]: E0217 17:04:34.282475 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:04:39 crc kubenswrapper[4829]: E0217 17:04:39.282978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:04:47 crc kubenswrapper[4829]: E0217 17:04:47.281633 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:04:51 crc kubenswrapper[4829]: E0217 17:04:51.281564 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:02 crc kubenswrapper[4829]: E0217 17:05:02.282644 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:05:05 crc kubenswrapper[4829]: E0217 17:05:05.281624 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:13 crc kubenswrapper[4829]: E0217 17:05:13.281124 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:05:17 crc kubenswrapper[4829]: E0217 17:05:17.282981 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:22 crc kubenswrapper[4829]: I0217 17:05:22.424485 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:05:22 crc kubenswrapper[4829]: I0217 17:05:22.425191 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:05:28 crc kubenswrapper[4829]: E0217 17:05:28.281012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:05:30 crc kubenswrapper[4829]: E0217 17:05:30.282471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:43 crc kubenswrapper[4829]: E0217 17:05:43.281814 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:05:43 crc kubenswrapper[4829]: E0217 17:05:43.282506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:52 crc kubenswrapper[4829]: I0217 17:05:52.424663 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:05:52 crc kubenswrapper[4829]: I0217 17:05:52.425267 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:05:55 crc kubenswrapper[4829]: E0217 17:05:55.281529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:05:55 crc kubenswrapper[4829]: E0217 17:05:55.281560 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:06:06 crc kubenswrapper[4829]: E0217 17:06:06.283194 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:06:07 crc kubenswrapper[4829]: E0217 17:06:07.281296 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:06:17 crc kubenswrapper[4829]: E0217 17:06:17.281495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:06:19 crc kubenswrapper[4829]: E0217 17:06:19.285038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.424471 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.424987 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425028 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw"
Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425598 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425655 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" gracePeriod=600
Feb 17 17:06:22 crc kubenswrapper[4829]: E0217 17:06:22.547366 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.310926 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" exitCode=0
Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.311251 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"}
Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.311284 4829 scope.go:117] "RemoveContainer" containerID="8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"
Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.312272 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:06:23 crc kubenswrapper[4829]: E0217 17:06:23.312734 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:06:28 crc kubenswrapper[4829]: E0217 17:06:28.287918 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:06:34 crc kubenswrapper[4829]: I0217 17:06:34.280569 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:06:34 crc kubenswrapper[4829]: E0217 17:06:34.281482 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:06:34 crc kubenswrapper[4829]: E0217 17:06:34.282277 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:06:40 crc kubenswrapper[4829]: E0217 17:06:40.282701 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:06:45 crc kubenswrapper[4829]: E0217 17:06:45.281865 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:06:47 crc kubenswrapper[4829]: I0217 17:06:47.279949 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:06:47 crc kubenswrapper[4829]: E0217 17:06:47.280568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:06:54 crc kubenswrapper[4829]: E0217 17:06:54.281480 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:06:59 crc kubenswrapper[4829]: E0217 17:06:59.281960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:07:01 crc kubenswrapper[4829]: I0217 17:07:01.279325 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:07:01 crc kubenswrapper[4829]: E0217 17:07:01.279900 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:07:06 crc kubenswrapper[4829]: E0217 17:07:06.283035 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:07:11 crc kubenswrapper[4829]: E0217 17:07:11.281253 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:07:12 crc kubenswrapper[4829]: I0217 17:07:12.279746 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:07:12 crc kubenswrapper[4829]: E0217 17:07:12.280211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:07:19 crc kubenswrapper[4829]: E0217 17:07:19.285959 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:07:22 crc kubenswrapper[4829]: E0217 17:07:22.281395 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:07:24 crc kubenswrapper[4829]: I0217 17:07:24.280044 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:07:24 crc kubenswrapper[4829]: E0217 17:07:24.280656 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:07:30 crc kubenswrapper[4829]: E0217 17:07:30.281643 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:07:36 crc kubenswrapper[4829]: E0217 17:07:36.282418 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:07:37 crc kubenswrapper[4829]: I0217 17:07:37.279008 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:07:37 crc kubenswrapper[4829]: E0217 17:07:37.279949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:07:42 crc kubenswrapper[4829]: E0217 17:07:42.283363 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:07:50 crc kubenswrapper[4829]: E0217 17:07:50.281351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:07:51 crc kubenswrapper[4829]: I0217 17:07:51.278996 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:07:51 crc kubenswrapper[4829]: E0217 17:07:51.279397 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:07:53 crc kubenswrapper[4829]: E0217 17:07:53.281229 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:08:01 crc kubenswrapper[4829]: E0217 17:08:01.281077 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:08:03 crc kubenswrapper[4829]: I0217 17:08:03.279468 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:08:03 crc kubenswrapper[4829]: E0217 17:08:03.280357 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:08:07 crc kubenswrapper[4829]: E0217 17:08:07.281410 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:08:12 crc kubenswrapper[4829]: E0217 17:08:12.281284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:08:18 crc kubenswrapper[4829]: I0217 17:08:18.287368 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:08:18 crc kubenswrapper[4829]: E0217 17:08:18.288244 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:08:20 crc kubenswrapper[4829]: E0217 17:08:20.281226 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:08:25 crc kubenswrapper[4829]: E0217 17:08:25.281335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:08:30 crc kubenswrapper[4829]: I0217 17:08:30.279120 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:08:30 crc kubenswrapper[4829]: E0217 17:08:30.279996 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:08:33 crc kubenswrapper[4829]: E0217 17:08:33.281770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:08:36 crc kubenswrapper[4829]: E0217 17:08:36.285071 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:08:45 crc kubenswrapper[4829]: I0217 17:08:45.280067 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:08:45 crc kubenswrapper[4829]: E0217 17:08:45.281024 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:08:47 crc kubenswrapper[4829]: E0217 17:08:47.281365 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:08:50 crc kubenswrapper[4829]: E0217 17:08:50.284436 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:09:00 crc kubenswrapper[4829]: I0217 17:09:00.280034 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:09:00 crc kubenswrapper[4829]: E0217 17:09:00.280861 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:09:01 crc kubenswrapper[4829]: E0217 17:09:01.281521 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:09:02 crc kubenswrapper[4829]: I0217 17:09:02.280737 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413048 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413150 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413358 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.414641 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:09:11 crc kubenswrapper[4829]: I0217 17:09:11.279720 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"
Feb 17 17:09:11 crc kubenswrapper[4829]: E0217 17:09:11.281200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398587 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398648 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398765 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.399959 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:13 crc kubenswrapper[4829]: E0217 17:09:13.281177 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:24 crc kubenswrapper[4829]: E0217 17:09:24.284385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:24 crc kubenswrapper[4829]: E0217 17:09:24.284462 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:25 crc kubenswrapper[4829]: I0217 17:09:25.279601 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:25 crc kubenswrapper[4829]: E0217 17:09:25.280032 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:37 
crc kubenswrapper[4829]: E0217 17:09:37.281464 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:38 crc kubenswrapper[4829]: E0217 17:09:38.298991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:40 crc kubenswrapper[4829]: I0217 17:09:40.279466 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:40 crc kubenswrapper[4829]: E0217 17:09:40.280459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:52 crc kubenswrapper[4829]: E0217 17:09:52.282305 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:52 crc kubenswrapper[4829]: E0217 17:09:52.283208 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:54 crc kubenswrapper[4829]: I0217 17:09:54.551015 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerID="564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393" exitCode=2 Feb 17 17:09:54 crc kubenswrapper[4829]: I0217 17:09:54.551105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerDied","Data":"564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393"} Feb 17 17:09:55 crc kubenswrapper[4829]: I0217 17:09:55.279398 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:55 crc kubenswrapper[4829]: E0217 17:09:55.279971 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.012220 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212636 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.220240 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97" (OuterVolumeSpecName: "kube-api-access-nqz97") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "kube-api-access-nqz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.316140 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.345740 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.346140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory" (OuterVolumeSpecName: "inventory") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.418377 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.418418 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586536 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerDied","Data":"c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2"} Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586587 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586607 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:10:03 crc kubenswrapper[4829]: E0217 17:10:03.282269 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:06 crc kubenswrapper[4829]: E0217 17:10:06.283123 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:09 crc kubenswrapper[4829]: I0217 17:10:09.279780 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:09 crc kubenswrapper[4829]: E0217 17:10:09.280634 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:16 crc kubenswrapper[4829]: E0217 17:10:16.284720 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:17 
crc kubenswrapper[4829]: E0217 17:10:17.281409 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:24 crc kubenswrapper[4829]: I0217 17:10:24.280750 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:24 crc kubenswrapper[4829]: E0217 17:10:24.283185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:30 crc kubenswrapper[4829]: E0217 17:10:30.283212 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:31 crc kubenswrapper[4829]: E0217 17:10:31.282663 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:36 crc kubenswrapper[4829]: I0217 17:10:36.279487 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:36 crc kubenswrapper[4829]: E0217 17:10:36.280597 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:43 crc kubenswrapper[4829]: E0217 17:10:43.281023 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:46 crc kubenswrapper[4829]: E0217 17:10:46.281292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:51 crc kubenswrapper[4829]: I0217 17:10:51.280123 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:51 crc kubenswrapper[4829]: E0217 17:10:51.280854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:58 crc kubenswrapper[4829]: E0217 17:10:58.288448 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:01 crc kubenswrapper[4829]: E0217 17:11:01.283426 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:06 crc kubenswrapper[4829]: I0217 17:11:06.279177 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:06 crc kubenswrapper[4829]: E0217 17:11:06.279985 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:11:11 crc kubenswrapper[4829]: E0217 17:11:11.281749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 
17:11:15 crc kubenswrapper[4829]: E0217 17:11:15.282588 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:20 crc kubenswrapper[4829]: I0217 17:11:20.279809 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:20 crc kubenswrapper[4829]: E0217 17:11:20.280717 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:11:26 crc kubenswrapper[4829]: E0217 17:11:26.282718 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:26 crc kubenswrapper[4829]: E0217 17:11:26.282746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:35 crc kubenswrapper[4829]: I0217 17:11:35.279482 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:36 crc kubenswrapper[4829]: I0217 17:11:36.664892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} Feb 17 17:11:37 crc kubenswrapper[4829]: E0217 17:11:37.281457 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:41 crc kubenswrapper[4829]: E0217 17:11:41.283760 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:51 crc kubenswrapper[4829]: E0217 17:11:51.282519 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:54 crc kubenswrapper[4829]: E0217 17:11:54.286610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" 
podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:04 crc kubenswrapper[4829]: E0217 17:12:04.283453 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:09 crc kubenswrapper[4829]: E0217 17:12:09.284838 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:15 crc kubenswrapper[4829]: E0217 17:12:15.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:20 crc kubenswrapper[4829]: E0217 17:12:20.282969 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:27 crc kubenswrapper[4829]: E0217 17:12:27.281915 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:31 crc kubenswrapper[4829]: E0217 17:12:31.281854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:40 crc kubenswrapper[4829]: E0217 17:12:40.282336 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:46 crc kubenswrapper[4829]: E0217 17:12:46.282372 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:54 crc kubenswrapper[4829]: E0217 17:12:54.281631 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.750948 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:12:59 crc kubenswrapper[4829]: E0217 17:12:59.752028 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.752045 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.752478 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.756640 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.769971 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014139 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.015058 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.015879 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.046385 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.077006 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: E0217 17:13:00.304566 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.718017 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:01 crc kubenswrapper[4829]: I0217 17:13:01.592608 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" exitCode=0 Feb 17 17:13:01 crc kubenswrapper[4829]: I0217 17:13:01.592713 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898"} Feb 17 17:13:01 crc 
kubenswrapper[4829]: I0217 17:13:01.592952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"dbfa6c8eeaf887ff64e6cd6c0e72bc752700665669daf4488fa17d9addbe5bd5"} Feb 17 17:13:03 crc kubenswrapper[4829]: I0217 17:13:03.619691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} Feb 17 17:13:04 crc kubenswrapper[4829]: I0217 17:13:04.633438 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" exitCode=0 Feb 17 17:13:04 crc kubenswrapper[4829]: I0217 17:13:04.633542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} Feb 17 17:13:06 crc kubenswrapper[4829]: E0217 17:13:06.281095 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:06 crc kubenswrapper[4829]: I0217 17:13:06.663643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} Feb 17 17:13:06 crc kubenswrapper[4829]: I0217 
17:13:06.693197 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvpv6" podStartSLOduration=3.912346909 podStartE2EDuration="7.693171606s" podCreationTimestamp="2026-02-17 17:12:59 +0000 UTC" firstStartedPulling="2026-02-17 17:13:01.596368279 +0000 UTC m=+4694.013386267" lastFinishedPulling="2026-02-17 17:13:05.377192986 +0000 UTC m=+4697.794210964" observedRunningTime="2026-02-17 17:13:06.685053636 +0000 UTC m=+4699.102071634" watchObservedRunningTime="2026-02-17 17:13:06.693171606 +0000 UTC m=+4699.110189584" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.078179 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.078518 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.141180 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.786445 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.849610 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:12 crc kubenswrapper[4829]: I0217 17:13:12.752814 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvpv6" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" containerID="cri-o://d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" gracePeriod=2 Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.314782 4829 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.364836 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.365015 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.365380 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.367347 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities" (OuterVolumeSpecName: "utilities") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.370560 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.371674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc" (OuterVolumeSpecName: "kube-api-access-ct4bc") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "kube-api-access-ct4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.435079 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.472823 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.472875 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.762916 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" exitCode=0 Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.762999 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.764670 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"dbfa6c8eeaf887ff64e6cd6c0e72bc752700665669daf4488fa17d9addbe5bd5"} Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.763009 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.764716 4829 scope.go:117] "RemoveContainer" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.801115 4829 scope.go:117] "RemoveContainer" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.805904 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.818197 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.826472 4829 scope.go:117] "RemoveContainer" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.875434 4829 scope.go:117] "RemoveContainer" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.876153 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": container with ID starting with d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7 not found: ID does not exist" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876216 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} err="failed to get container status \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": rpc error: code = NotFound desc = could not find 
container \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": container with ID starting with d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7 not found: ID does not exist" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876259 4829 scope.go:117] "RemoveContainer" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.876841 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": container with ID starting with b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058 not found: ID does not exist" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876876 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} err="failed to get container status \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": rpc error: code = NotFound desc = could not find container \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": container with ID starting with b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058 not found: ID does not exist" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876900 4829 scope.go:117] "RemoveContainer" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.877328 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": container with ID starting with ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898 not found: ID does 
not exist" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.877358 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898"} err="failed to get container status \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": rpc error: code = NotFound desc = could not find container \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": container with ID starting with ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898 not found: ID does not exist" Feb 17 17:13:14 crc kubenswrapper[4829]: E0217 17:13:14.287584 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:14 crc kubenswrapper[4829]: I0217 17:13:14.296843 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" path="/var/lib/kubelet/pods/0129998b-a7ba-43ce-be38-40e50b1fd26d/volumes" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.249084 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250249 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-content" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250268 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-content" Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250282 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250290 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250299 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-utilities" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250307 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-utilities" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250610 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.252940 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.261779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361734 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464507 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.465033 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.487362 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.595365 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.184318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:19 crc kubenswrapper[4829]: W0217 17:13:19.191550 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3af34c_4b38_44da_a726_72f1565c3fc8.slice/crio-a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e WatchSource:0}: Error finding container a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e: Status 404 returned error can't find the container with id a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e Feb 17 17:13:19 crc kubenswrapper[4829]: E0217 17:13:19.283013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.824864 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a" exitCode=0 Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.824953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"} Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.825143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" 
event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e"} Feb 17 17:13:21 crc kubenswrapper[4829]: I0217 17:13:21.887970 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} Feb 17 17:13:23 crc kubenswrapper[4829]: I0217 17:13:23.909623 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28" exitCode=0 Feb 17 17:13:23 crc kubenswrapper[4829]: I0217 17:13:23.909704 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} Feb 17 17:13:24 crc kubenswrapper[4829]: I0217 17:13:24.924187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"} Feb 17 17:13:24 crc kubenswrapper[4829]: I0217 17:13:24.963443 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxd6h" podStartSLOduration=2.448047371 podStartE2EDuration="6.963421364s" podCreationTimestamp="2026-02-17 17:13:18 +0000 UTC" firstStartedPulling="2026-02-17 17:13:19.830368438 +0000 UTC m=+4712.247386416" lastFinishedPulling="2026-02-17 17:13:24.345742431 +0000 UTC m=+4716.762760409" observedRunningTime="2026-02-17 17:13:24.956725314 +0000 UTC m=+4717.373743302" watchObservedRunningTime="2026-02-17 17:13:24.963421364 +0000 UTC 
m=+4717.380439342" Feb 17 17:13:27 crc kubenswrapper[4829]: E0217 17:13:27.283283 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.596229 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.596549 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.644207 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:32 crc kubenswrapper[4829]: E0217 17:13:32.284002 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:38 crc kubenswrapper[4829]: I0217 17:13:38.670173 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:38 crc kubenswrapper[4829]: I0217 17:13:38.726120 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.061281 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxd6h" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" 
containerName="registry-server" containerID="cri-o://b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" gracePeriod=2 Feb 17 17:13:39 crc kubenswrapper[4829]: E0217 17:13:39.287593 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.651731 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.734998 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.735210 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.735236 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.739493 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities" (OuterVolumeSpecName: "utilities") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.744597 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x" (OuterVolumeSpecName: "kube-api-access-ztv4x") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "kube-api-access-ztv4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.764820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838867 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838901 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838912 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") on node \"crc\" DevicePath \"\""
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.072985 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" exitCode=0
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073037 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"}
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073050 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073076 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e"}
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073099 4829 scope.go:117] "RemoveContainer" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.103879 4829 scope.go:117] "RemoveContainer" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.119384 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"]
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.128731 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"]
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.130882 4829 scope.go:117] "RemoveContainer" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.181491 4829 scope.go:117] "RemoveContainer" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"
Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 17:13:40.182061 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": container with ID starting with b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9 not found: ID does not exist" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182093 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"} err="failed to get container status \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": rpc error: code = NotFound desc = could not find container \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": container with ID starting with b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9 not found: ID does not exist"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182121 4829 scope.go:117] "RemoveContainer" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"
Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 17:13:40.182417 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": container with ID starting with 52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28 not found: ID does not exist" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182449 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} err="failed to get container status \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": rpc error: code = NotFound desc = could not find container \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": container with ID starting with 52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28 not found: ID does not exist"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182471 4829 scope.go:117] "RemoveContainer" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"
Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 17:13:40.182822 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": container with ID starting with 784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a not found: ID does not exist" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182872 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"} err="failed to get container status \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": rpc error: code = NotFound desc = could not find container \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": container with ID starting with 784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a not found: ID does not exist"
Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.293006 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" path="/var/lib/kubelet/pods/cd3af34c-4b38-44da-a726-72f1565c3fc8/volumes"
Feb 17 17:13:47 crc kubenswrapper[4829]: E0217 17:13:47.281988 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:13:51 crc kubenswrapper[4829]: E0217 17:13:51.283375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:13:52 crc kubenswrapper[4829]: I0217 17:13:52.425067 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:13:52 crc kubenswrapper[4829]: I0217 17:13:52.425467 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.292725 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293280 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-utilities"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293304 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-utilities"
Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293327 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293335 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server"
Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293358 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-content"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293366 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-content"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293682 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.295950 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.314836 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379657 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379895 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481572 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481646 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481719 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.482157 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.482190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.657803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.922747 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:13:54 crc kubenswrapper[4829]: I0217 17:13:54.455460 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:13:54 crc kubenswrapper[4829]: W0217 17:13:54.461506 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a19588b_3fe9_4064_8fc0_b9053f7efdf8.slice/crio-f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece WatchSource:0}: Error finding container f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece: Status 404 returned error can't find the container with id f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece
Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236853 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad" exitCode=0
Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad"}
Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236932 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece"}
Feb 17 17:13:56 crc kubenswrapper[4829]: I0217 17:13:56.250323 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d"}
Feb 17 17:13:58 crc kubenswrapper[4829]: I0217 17:13:58.274952 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d" exitCode=0
Feb 17 17:13:58 crc kubenswrapper[4829]: I0217 17:13:58.275047 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d"}
Feb 17 17:13:58 crc kubenswrapper[4829]: E0217 17:13:58.280837 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:13:59 crc kubenswrapper[4829]: I0217 17:13:59.291434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99"}
Feb 17 17:13:59 crc kubenswrapper[4829]: I0217 17:13:59.318258 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vztn2" podStartSLOduration=2.8591588100000003 podStartE2EDuration="6.31823727s" podCreationTimestamp="2026-02-17 17:13:53 +0000 UTC" firstStartedPulling="2026-02-17 17:13:55.239247316 +0000 UTC m=+4747.656265294" lastFinishedPulling="2026-02-17 17:13:58.698325776 +0000 UTC m=+4751.115343754" observedRunningTime="2026-02-17 17:13:59.314730306 +0000 UTC m=+4751.731748284" watchObservedRunningTime="2026-02-17 17:13:59.31823727 +0000 UTC m=+4751.735255248"
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.870198 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n675c"]
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.873970 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.884795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"]
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.945688 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.945832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.946143 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.050282 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.050908 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051404 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051974 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.450932 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.513485 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.923712 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.924083 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.986689 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.090905 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"]
Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.363824 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"}
Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.363869 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"672a375342d36e66eb68b76fc86c5bb513917a01ecc45fb25bf0a6473d4b6768"}
Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.424121 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.374539 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" exitCode=0
Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.374620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"}
Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.377204 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.234508 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.392837 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"}
Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.393520 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vztn2" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" containerID="cri-o://8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99" gracePeriod=2
Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402404 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402450 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402588 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.403777 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:14:07 crc kubenswrapper[4829]: I0217 17:14:07.404857 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99" exitCode=0
Feb 17 17:14:07 crc kubenswrapper[4829]: I0217 17:14:07.405055 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99"}
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.129657 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322139 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") "
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322356 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") "
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322465 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") "
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.323525 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities" (OuterVolumeSpecName: "utilities") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.328256 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s" (OuterVolumeSpecName: "kube-api-access-d9d5s") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "kube-api-access-d9d5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece"}
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418250 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2"
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418912 4829 scope.go:117] "RemoveContainer" containerID="8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99"
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.425354 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") on node \"crc\" DevicePath \"\""
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.425384 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.445256 4829 scope.go:117] "RemoveContainer" containerID="755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d"
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.482441 4829 scope.go:117] "RemoveContainer" containerID="bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad"
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.776160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.839315 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:14:09 crc kubenswrapper[4829]: I0217 17:14:09.058222 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:14:09 crc kubenswrapper[4829]: I0217 17:14:09.068611 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vztn2"]
Feb 17 17:14:10 crc kubenswrapper[4829]: I0217 17:14:10.295099 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" path="/var/lib/kubelet/pods/7a19588b-3fe9-4064-8fc0-b9053f7efdf8/volumes"
Feb 17 17:14:12 crc kubenswrapper[4829]: I0217 17:14:12.458783 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" exitCode=0
Feb 17 17:14:12 crc kubenswrapper[4829]: I0217 17:14:12.458868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"}
Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.388651 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.389003 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.389153 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.390725 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.472219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.498468 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n675c" podStartSLOduration=4.017884628 podStartE2EDuration="11.498449197s" podCreationTimestamp="2026-02-17 17:14:02 +0000 UTC" firstStartedPulling="2026-02-17 17:14:05.376951904 +0000 UTC m=+4757.793969882" lastFinishedPulling="2026-02-17 17:14:12.857516453 +0000 UTC m=+4765.274534451" observedRunningTime="2026-02-17 17:14:13.489456843 +0000 UTC m=+4765.906474821" watchObservedRunningTime="2026-02-17 17:14:13.498449197 +0000 UTC m=+4765.915467175" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.514906 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.514959 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 
17:14:14 crc kubenswrapper[4829]: I0217 17:14:14.569216 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n675c" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" probeResult="failure" output=< Feb 17 17:14:14 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:14:14 crc kubenswrapper[4829]: > Feb 17 17:14:17 crc kubenswrapper[4829]: E0217 17:14:17.281533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:22 crc kubenswrapper[4829]: I0217 17:14:22.425502 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:22 crc kubenswrapper[4829]: I0217 17:14:22.426151 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.467167 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.524361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.706504 4829 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:25 crc kubenswrapper[4829]: I0217 17:14:25.617490 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n675c" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" containerID="cri-o://3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" gracePeriod=2 Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.628350 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631124 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" exitCode=0 Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631163 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631172 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631204 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"672a375342d36e66eb68b76fc86c5bb513917a01ecc45fb25bf0a6473d4b6768"} Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631225 4829 scope.go:117] "RemoveContainer" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.660680 4829 scope.go:117] "RemoveContainer" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.697353 4829 scope.go:117] "RemoveContainer" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748341 4829 scope.go:117] "RemoveContainer" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.748694 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": container with ID starting with 3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7 not found: ID does not exist" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748722 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} err="failed to get container status \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": rpc error: code = NotFound desc = could not find container \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": container with ID starting with 3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7 not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748744 4829 scope.go:117] "RemoveContainer" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.749045 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": container with ID starting with f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a not found: ID does not exist" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749084 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"} err="failed to get container status \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": rpc error: code = NotFound desc = could not find container \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": container with ID starting with f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749109 4829 scope.go:117] "RemoveContainer" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.749341 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": container with ID starting with d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef not found: ID does not exist" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749373 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"} err="failed to get container status \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": rpc error: code = NotFound desc = could not find container \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": container with ID starting with d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.754767 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.754839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.755111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" 
(UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.755874 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities" (OuterVolumeSpecName: "utilities") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.762518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6" (OuterVolumeSpecName: "kube-api-access-tkss6") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "kube-api-access-tkss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.858936 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.858976 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.899674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.966919 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.975263 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.988047 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:28 crc kubenswrapper[4829]: E0217 17:14:28.289483 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:28 crc kubenswrapper[4829]: I0217 17:14:28.295883 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" path="/var/lib/kubelet/pods/9fcf4ba0-36bd-4bfe-89aa-b295791b5961/volumes" Feb 17 17:14:31 crc kubenswrapper[4829]: E0217 17:14:31.281291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:41 crc kubenswrapper[4829]: E0217 17:14:41.284009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:42 crc kubenswrapper[4829]: E0217 17:14:42.284034 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.424473 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.426208 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.426285 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.427552 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:14:52 crc kubenswrapper[4829]: 
I0217 17:14:52.427695 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" gracePeriod=600 Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916723 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" exitCode=0 Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916778 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916825 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:14:53 crc kubenswrapper[4829]: E0217 17:14:53.280375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:53 crc kubenswrapper[4829]: E0217 17:14:53.280375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:53 crc kubenswrapper[4829]: I0217 17:14:53.928668 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.163006 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164120 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164136 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164154 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164160 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164168 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164175 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164192 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164198 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164218 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164225 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164241 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164247 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164465 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164478 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.165400 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.169639 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.169854 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174915 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.194772 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277326 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277386 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.278541 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.292054 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.304372 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.497183 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:01 crc kubenswrapper[4829]: I0217 17:15:01.035061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.018203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerStarted","Data":"df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e"} Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.019919 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerStarted","Data":"9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021"} Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.037144 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" 
podStartSLOduration=2.037124687 podStartE2EDuration="2.037124687s" podCreationTimestamp="2026-02-17 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:15:02.03503041 +0000 UTC m=+4814.452048408" watchObservedRunningTime="2026-02-17 17:15:02.037124687 +0000 UTC m=+4814.454142665" Feb 17 17:15:03 crc kubenswrapper[4829]: I0217 17:15:03.033360 4829 generic.go:334] "Generic (PLEG): container finished" podID="fe68a533-c785-4f43-bee6-b83031125f08" containerID="df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e" exitCode=0 Feb 17 17:15:03 crc kubenswrapper[4829]: I0217 17:15:03.033446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerDied","Data":"df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e"} Feb 17 17:15:04 crc kubenswrapper[4829]: E0217 17:15:04.284459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.464967 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608307 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608590 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608655 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.609196 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.615289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.615340 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq" (OuterVolumeSpecName: "kube-api-access-r86rq") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "kube-api-access-r86rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711186 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711234 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711249 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.063920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerDied","Data":"9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021"} Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.064205 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.064268 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.146617 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.164356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 17:15:06 crc kubenswrapper[4829]: E0217 17:15:06.281926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:06 crc kubenswrapper[4829]: I0217 17:15:06.292675 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3000c07b-e126-4f72-9667-251ca9a53989" path="/var/lib/kubelet/pods/3000c07b-e126-4f72-9667-251ca9a53989/volumes" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.046346 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:13 crc kubenswrapper[4829]: E0217 17:15:13.048460 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.048564 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.049009 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.050300 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054109 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054128 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054363 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054548 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.057361 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136220 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" 
Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238456 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.244250 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.245150 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.256024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.404439 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:14 crc kubenswrapper[4829]: I0217 17:15:14.587287 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.486799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerStarted","Data":"b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55"} Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.487190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerStarted","Data":"ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752"} Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.510367 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" podStartSLOduration=2.045451122 podStartE2EDuration="2.510344521s" podCreationTimestamp="2026-02-17 17:15:13 +0000 UTC" firstStartedPulling="2026-02-17 17:15:14.592420444 +0000 UTC m=+4827.009438422" lastFinishedPulling="2026-02-17 17:15:15.057313843 +0000 UTC m=+4827.474331821" observedRunningTime="2026-02-17 17:15:15.499752505 +0000 UTC m=+4827.916770493" watchObservedRunningTime="2026-02-17 17:15:15.510344521 +0000 UTC m=+4827.927362499" Feb 17 17:15:17 crc kubenswrapper[4829]: E0217 17:15:17.280795 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:19 crc kubenswrapper[4829]: I0217 17:15:19.887401 4829 scope.go:117] "RemoveContainer" containerID="95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f" Feb 17 17:15:20 crc kubenswrapper[4829]: E0217 17:15:20.282156 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:31 crc kubenswrapper[4829]: E0217 17:15:31.281738 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:35 crc kubenswrapper[4829]: E0217 17:15:35.282471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:44 crc kubenswrapper[4829]: E0217 17:15:44.283763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:46 crc kubenswrapper[4829]: E0217 17:15:46.281708 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:55 crc kubenswrapper[4829]: E0217 17:15:55.284091 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:59 crc kubenswrapper[4829]: E0217 17:15:59.283164 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:10 crc kubenswrapper[4829]: E0217 17:16:10.288367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:10 crc kubenswrapper[4829]: E0217 17:16:10.289815 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:21 crc kubenswrapper[4829]: E0217 17:16:21.281823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:22 crc kubenswrapper[4829]: E0217 17:16:22.281683 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:33 crc kubenswrapper[4829]: E0217 17:16:33.282646 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:37 crc kubenswrapper[4829]: E0217 17:16:37.286277 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:44 crc kubenswrapper[4829]: E0217 17:16:44.282810 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:52 crc kubenswrapper[4829]: E0217 17:16:52.285662 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:56 crc kubenswrapper[4829]: E0217 17:16:56.284129 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:03 crc kubenswrapper[4829]: E0217 17:17:03.281880 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:11 crc kubenswrapper[4829]: E0217 17:17:11.282089 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:15 crc kubenswrapper[4829]: E0217 17:17:15.282504 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:22 crc kubenswrapper[4829]: I0217 17:17:22.424864 4829 patch_prober.go:28] interesting 
pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:22 crc kubenswrapper[4829]: I0217 17:17:22.425408 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:23 crc kubenswrapper[4829]: E0217 17:17:23.281496 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:28 crc kubenswrapper[4829]: E0217 17:17:28.291967 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:36 crc kubenswrapper[4829]: E0217 17:17:36.282713 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:41 crc kubenswrapper[4829]: E0217 17:17:41.282313 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:51 crc kubenswrapper[4829]: E0217 17:17:51.282187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:52 crc kubenswrapper[4829]: I0217 17:17:52.424334 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:52 crc kubenswrapper[4829]: I0217 17:17:52.424401 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:56 crc kubenswrapper[4829]: E0217 17:17:56.283478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:05 crc kubenswrapper[4829]: E0217 17:18:05.281822 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:10 crc kubenswrapper[4829]: E0217 17:18:10.284779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:20 crc kubenswrapper[4829]: E0217 17:18:20.281878 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.424938 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.425439 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.425484 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.426372 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.426428 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" gracePeriod=600 Feb 17 17:18:22 crc kubenswrapper[4829]: E0217 17:18:22.559903 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.555672 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" exitCode=0 Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.555764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.556021 4829 scope.go:117] 
"RemoveContainer" containerID="e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.557040 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:23 crc kubenswrapper[4829]: E0217 17:18:23.557507 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:24 crc kubenswrapper[4829]: E0217 17:18:24.283388 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:35 crc kubenswrapper[4829]: E0217 17:18:35.281505 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:36 crc kubenswrapper[4829]: E0217 17:18:36.281602 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:38 
crc kubenswrapper[4829]: I0217 17:18:38.279508 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:38 crc kubenswrapper[4829]: E0217 17:18:38.280427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:46 crc kubenswrapper[4829]: E0217 17:18:46.281829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:50 crc kubenswrapper[4829]: E0217 17:18:50.285412 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:51 crc kubenswrapper[4829]: I0217 17:18:51.279597 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:51 crc kubenswrapper[4829]: E0217 17:18:51.280220 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:58 crc kubenswrapper[4829]: E0217 17:18:58.288273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:03 crc kubenswrapper[4829]: I0217 17:19:03.279568 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:03 crc kubenswrapper[4829]: E0217 17:19:03.281092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:04 crc kubenswrapper[4829]: E0217 17:19:04.281559 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:10 crc kubenswrapper[4829]: I0217 17:19:10.284752 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409396 4829 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409714 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409850 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.411117 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.380303 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.380823 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.381190 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.382990 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:18 crc kubenswrapper[4829]: I0217 17:19:18.290624 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:18 crc kubenswrapper[4829]: E0217 17:19:18.291900 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:22 crc kubenswrapper[4829]: E0217 17:19:22.283675 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:28 crc kubenswrapper[4829]: E0217 17:19:28.296687 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:33 crc kubenswrapper[4829]: I0217 17:19:33.279784 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:33 crc kubenswrapper[4829]: E0217 17:19:33.280718 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:37 crc kubenswrapper[4829]: E0217 17:19:37.281456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:40 crc kubenswrapper[4829]: E0217 17:19:40.286447 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:47 crc kubenswrapper[4829]: I0217 17:19:47.280339 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:47 crc kubenswrapper[4829]: E0217 17:19:47.281318 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:50 crc kubenswrapper[4829]: E0217 17:19:50.289337 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:53 crc kubenswrapper[4829]: E0217 17:19:53.282285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:00 crc kubenswrapper[4829]: I0217 17:20:00.280211 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:00 crc kubenswrapper[4829]: E0217 17:20:00.281020 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:03 crc kubenswrapper[4829]: E0217 17:20:03.281779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:08 crc kubenswrapper[4829]: E0217 17:20:08.290722 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:12 crc kubenswrapper[4829]: I0217 17:20:12.280491 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:12 crc kubenswrapper[4829]: E0217 17:20:12.280763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:16 crc kubenswrapper[4829]: E0217 17:20:16.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:21 crc kubenswrapper[4829]: E0217 17:20:21.282103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:25 crc kubenswrapper[4829]: I0217 17:20:25.279862 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:25 crc kubenswrapper[4829]: E0217 17:20:25.280782 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:30 crc kubenswrapper[4829]: E0217 17:20:30.282614 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:34 crc kubenswrapper[4829]: E0217 17:20:34.283103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:38 crc kubenswrapper[4829]: I0217 17:20:38.287750 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:38 crc kubenswrapper[4829]: E0217 17:20:38.288511 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:41 crc kubenswrapper[4829]: E0217 17:20:41.281323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:46 crc kubenswrapper[4829]: E0217 17:20:46.286943 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:52 crc kubenswrapper[4829]: I0217 17:20:52.281937 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:52 crc kubenswrapper[4829]: E0217 17:20:52.282762 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:55 crc kubenswrapper[4829]: E0217 17:20:55.282321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:57 crc kubenswrapper[4829]: E0217 17:20:57.287128 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:04 crc kubenswrapper[4829]: I0217 17:21:04.279356 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:04 crc kubenswrapper[4829]: E0217 17:21:04.280397 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:06 crc kubenswrapper[4829]: E0217 17:21:06.282546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:10 crc kubenswrapper[4829]: E0217 17:21:10.282335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:15 crc kubenswrapper[4829]: I0217 17:21:15.280013 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:15 crc kubenswrapper[4829]: E0217 17:21:15.280692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:18 crc kubenswrapper[4829]: E0217 17:21:18.290923 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:25 crc kubenswrapper[4829]: E0217 17:21:25.281540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:28 crc kubenswrapper[4829]: I0217 17:21:28.295223 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:28 crc kubenswrapper[4829]: E0217 17:21:28.296945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:29 crc kubenswrapper[4829]: I0217 17:21:29.471285 4829 generic.go:334] "Generic (PLEG): container finished" podID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerID="b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55" exitCode=2 Feb 17 
17:21:29 crc kubenswrapper[4829]: I0217 17:21:29.471457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerDied","Data":"b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55"} Feb 17 17:21:30 crc kubenswrapper[4829]: I0217 17:21:30.976260 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056337 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056753 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056835 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.063060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq" (OuterVolumeSpecName: "kube-api-access-5zxvq") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). 
InnerVolumeSpecName "kube-api-access-5zxvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.087632 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory" (OuterVolumeSpecName: "inventory") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.091207 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160820 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160867 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160884 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492479 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerDied","Data":"ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752"} Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492512 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492617 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:21:33 crc kubenswrapper[4829]: E0217 17:21:33.281732 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:36 crc kubenswrapper[4829]: E0217 17:21:36.283319 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:40 crc kubenswrapper[4829]: I0217 17:21:40.281179 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:40 crc kubenswrapper[4829]: E0217 17:21:40.282042 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:48 crc kubenswrapper[4829]: E0217 17:21:48.294527 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:50 crc kubenswrapper[4829]: E0217 17:21:50.282188 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:52 crc kubenswrapper[4829]: I0217 17:21:52.279400 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:52 crc kubenswrapper[4829]: E0217 17:21:52.280044 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:59 crc kubenswrapper[4829]: E0217 17:21:59.287409 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:02 crc kubenswrapper[4829]: E0217 17:22:02.283342 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:06 crc kubenswrapper[4829]: I0217 17:22:06.280246 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:06 crc kubenswrapper[4829]: E0217 17:22:06.281102 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:14 crc kubenswrapper[4829]: E0217 17:22:14.341390 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:14 crc kubenswrapper[4829]: E0217 17:22:14.341483 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:18 crc kubenswrapper[4829]: I0217 17:22:18.286756 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:18 crc kubenswrapper[4829]: E0217 17:22:18.287321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:26 crc kubenswrapper[4829]: E0217 17:22:26.280989 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:28 crc kubenswrapper[4829]: E0217 17:22:28.291084 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:30 crc kubenswrapper[4829]: I0217 17:22:30.280118 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:30 crc kubenswrapper[4829]: E0217 17:22:30.281004 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:39 crc kubenswrapper[4829]: E0217 17:22:39.288064 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:41 crc kubenswrapper[4829]: E0217 17:22:41.282839 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:45 crc kubenswrapper[4829]: I0217 17:22:45.280072 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:45 crc kubenswrapper[4829]: E0217 17:22:45.281944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:54 crc kubenswrapper[4829]: E0217 17:22:54.281309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:54 crc kubenswrapper[4829]: E0217 17:22:54.281346 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:57 crc kubenswrapper[4829]: I0217 17:22:57.280548 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:57 crc kubenswrapper[4829]: E0217 17:22:57.281413 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:08 crc kubenswrapper[4829]: E0217 17:23:08.289749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:09 crc kubenswrapper[4829]: I0217 17:23:09.282504 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:09 crc kubenswrapper[4829]: E0217 17:23:09.283159 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:09 crc kubenswrapper[4829]: E0217 17:23:09.286261 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.275366 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:10 crc kubenswrapper[4829]: E0217 17:23:10.276193 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.276210 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.276449 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.277794 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.284650 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmblp"/"openshift-service-ca.crt" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.285153 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bmblp"/"default-dockercfg-kqp75" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.285698 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmblp"/"kube-root-ca.crt" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.295782 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.374473 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.375100 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.478161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " 
pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.479080 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.479476 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.501276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.609859 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:11 crc kubenswrapper[4829]: I0217 17:23:11.338550 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:11 crc kubenswrapper[4829]: I0217 17:23:11.560155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"36f177bc87d78b91e8368779591515fa213a4d940eb62236187acd5077b3fd85"} Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.497259 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.619438 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.619601 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.790484 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.791019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.791323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.893677 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod 
\"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894849 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.915034 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.956282 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:20 crc kubenswrapper[4829]: E0217 17:23:20.281088 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.333444 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:20 crc kubenswrapper[4829]: W0217 17:23:20.343795 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ec332b_ef73_41c2_8ece_63d68db3a6ac.slice/crio-fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909 WatchSource:0}: Error finding container fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909: Status 404 returned error can't find the container with id fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909 Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.668241 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.668286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673331 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c" exitCode=0 Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673376 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673398 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.688376 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmblp/must-gather-bqwqp" podStartSLOduration=2.14404433 podStartE2EDuration="10.688353595s" podCreationTimestamp="2026-02-17 17:23:10 +0000 UTC" firstStartedPulling="2026-02-17 17:23:11.344905116 +0000 UTC m=+5303.761923104" lastFinishedPulling="2026-02-17 17:23:19.889214391 +0000 UTC m=+5312.306232369" observedRunningTime="2026-02-17 17:23:20.684385687 +0000 UTC m=+5313.101403685" watchObservedRunningTime="2026-02-17 17:23:20.688353595 +0000 UTC m=+5313.105371573" Feb 17 17:23:21 crc kubenswrapper[4829]: I0217 17:23:21.280141 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:21 crc kubenswrapper[4829]: E0217 17:23:21.280743 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:22 crc kubenswrapper[4829]: E0217 17:23:22.281698 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:22 crc kubenswrapper[4829]: I0217 17:23:22.698631 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9"} Feb 17 17:23:23 crc kubenswrapper[4829]: E0217 17:23:23.523627 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ec332b_ef73_41c2_8ece_63d68db3a6ac.slice/crio-db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:23:23 crc kubenswrapper[4829]: I0217 17:23:23.712637 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9" exitCode=0 Feb 17 17:23:23 crc kubenswrapper[4829]: I0217 17:23:23.712695 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9"} Feb 17 17:23:25 crc kubenswrapper[4829]: I0217 17:23:25.738116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" 
event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105"} Feb 17 17:23:25 crc kubenswrapper[4829]: I0217 17:23:25.764871 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sx62" podStartSLOduration=7.305615414 podStartE2EDuration="11.764849353s" podCreationTimestamp="2026-02-17 17:23:14 +0000 UTC" firstStartedPulling="2026-02-17 17:23:20.675869964 +0000 UTC m=+5313.092887942" lastFinishedPulling="2026-02-17 17:23:25.135103893 +0000 UTC m=+5317.552121881" observedRunningTime="2026-02-17 17:23:25.756763792 +0000 UTC m=+5318.173781790" watchObservedRunningTime="2026-02-17 17:23:25.764849353 +0000 UTC m=+5318.181867331" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.179348 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.181414 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.300880 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.301252 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc 
kubenswrapper[4829]: I0217 17:23:28.432787 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.502991 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.776392 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerStarted","Data":"2766fa515c7c5536d5585a5a1b48c5ea41cda2a43fa25926248336cd2b999247"} Feb 17 17:23:33 crc kubenswrapper[4829]: I0217 17:23:33.281634 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:33 crc kubenswrapper[4829]: E0217 17:23:33.284861 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:33 crc kubenswrapper[4829]: I0217 17:23:33.859371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} Feb 17 17:23:34 crc kubenswrapper[4829]: I0217 17:23:34.956596 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:34 crc 
kubenswrapper[4829]: I0217 17:23:34.957178 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:35 crc kubenswrapper[4829]: I0217 17:23:35.097255 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:35 crc kubenswrapper[4829]: E0217 17:23:35.283741 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:35 crc kubenswrapper[4829]: I0217 17:23:35.948104 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:36 crc kubenswrapper[4829]: I0217 17:23:36.005491 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:37 crc kubenswrapper[4829]: I0217 17:23:37.908495 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sx62" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" containerID="cri-o://bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" gracePeriod=2 Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.063023 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.066224 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.076090 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173478 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275514 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275671 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275705 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.276237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.278094 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.313873 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.398785 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.925628 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" exitCode=0 Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.925918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105"} Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.335715 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.481410 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510318 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") 
pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.512054 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities" (OuterVolumeSpecName: "utilities") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.517705 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx" (OuterVolumeSpecName: "kube-api-access-msppx") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "kube-api-access-msppx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.572160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.613740 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.614074 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.614086 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007777 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606" exitCode=0 Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerStarted","Data":"4660542cd1e6d1038696b3b3c19f270dc14e3e7daa0c7a582a55fec95b5904de"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010209 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" 
event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010265 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010360 4829 scope.go:117] "RemoveContainer" containerID="bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.012988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerStarted","Data":"829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.042993 4829 scope.go:117] "RemoveContainer" containerID="db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.068431 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" podStartSLOduration=1.6925776639999999 podStartE2EDuration="15.068410891s" podCreationTimestamp="2026-02-17 17:23:28 +0000 UTC" firstStartedPulling="2026-02-17 17:23:28.540888038 +0000 UTC m=+5320.957906016" lastFinishedPulling="2026-02-17 17:23:41.916721265 +0000 UTC m=+5334.333739243" observedRunningTime="2026-02-17 17:23:43.046163093 +0000 UTC m=+5335.463181081" watchObservedRunningTime="2026-02-17 17:23:43.068410891 +0000 UTC m=+5335.485428869" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.082920 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.092826 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.563434 4829 scope.go:117] "RemoveContainer" containerID="ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c" Feb 17 17:23:44 crc kubenswrapper[4829]: I0217 17:23:44.294964 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" path="/var/lib/kubelet/pods/f6ec332b-ef73-41c2-8ece-63d68db3a6ac/volumes" Feb 17 17:23:45 crc kubenswrapper[4829]: I0217 17:23:45.037370 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8" exitCode=0 Feb 17 17:23:45 crc kubenswrapper[4829]: I0217 17:23:45.037422 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8"} Feb 17 17:23:45 crc kubenswrapper[4829]: E0217 17:23:45.281522 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:46 crc kubenswrapper[4829]: E0217 17:23:46.281993 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:47 crc kubenswrapper[4829]: I0217 17:23:47.061033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerStarted","Data":"ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681"} Feb 17 17:23:47 crc kubenswrapper[4829]: I0217 17:23:47.082977 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdjdf" podStartSLOduration=6.657288181 podStartE2EDuration="9.082947162s" podCreationTimestamp="2026-02-17 17:23:38 +0000 UTC" firstStartedPulling="2026-02-17 17:23:43.009658327 +0000 UTC m=+5335.426676305" lastFinishedPulling="2026-02-17 17:23:45.435317308 +0000 UTC m=+5337.852335286" observedRunningTime="2026-02-17 17:23:47.077816472 +0000 UTC m=+5339.494834460" watchObservedRunningTime="2026-02-17 17:23:47.082947162 +0000 UTC m=+5339.499965140" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.399643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.399962 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.454350 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:57 crc kubenswrapper[4829]: E0217 17:23:57.282000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:58 crc kubenswrapper[4829]: I0217 17:23:58.458209 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 
17:23:58 crc kubenswrapper[4829]: I0217 17:23:58.542517 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:59 crc kubenswrapper[4829]: I0217 17:23:59.203624 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdjdf" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" containerID="cri-o://ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" gracePeriod=2 Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.281089 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" exitCode=0 Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.340184 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681"} Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.457857 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.517392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.518025 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.518103 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.536687 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities" (OuterVolumeSpecName: "utilities") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.543096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh" (OuterVolumeSpecName: "kube-api-access-df5hh") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "kube-api-access-df5hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.553163 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621446 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621485 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621497 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:01 crc kubenswrapper[4829]: E0217 17:24:01.283126 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.299637 4829 generic.go:334] "Generic (PLEG): container finished" podID="82b76227-c8f4-45e3-a632-0681deb43d58" containerID="829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e" exitCode=0 Feb 17 17:24:01 crc 
kubenswrapper[4829]: I0217 17:24:01.299790 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerDied","Data":"829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e"} Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304483 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"4660542cd1e6d1038696b3b3c19f270dc14e3e7daa0c7a582a55fec95b5904de"} Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304542 4829 scope.go:117] "RemoveContainer" containerID="ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304765 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.350865 4829 scope.go:117] "RemoveContainer" containerID="e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.360550 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.384895 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.398666 4829 scope.go:117] "RemoveContainer" containerID="942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.296349 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" path="/var/lib/kubelet/pods/58e44360-7cec-4d73-b5a7-1abc208e7e82/volumes" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 
17:24:02.433391 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.465886 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"82b76227-c8f4-45e3-a632-0681deb43d58\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.465967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host" (OuterVolumeSpecName: "host") pod "82b76227-c8f4-45e3-a632-0681deb43d58" (UID: "82b76227-c8f4-45e3-a632-0681deb43d58"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.466274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"82b76227-c8f4-45e3-a632-0681deb43d58\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.467213 4829 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.474669 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h" (OuterVolumeSpecName: "kube-api-access-t6x4h") pod "82b76227-c8f4-45e3-a632-0681deb43d58" (UID: "82b76227-c8f4-45e3-a632-0681deb43d58"). InnerVolumeSpecName "kube-api-access-t6x4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.478389 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.504393 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.569516 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.327515 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2766fa515c7c5536d5585a5a1b48c5ea41cda2a43fa25926248336cd2b999247" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.327627 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.720509 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"] Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721082 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721104 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721123 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721128 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721135 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721141 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721167 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721173 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721187 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721196 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721218 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721225 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721252 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721259 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721473 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721516 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.722554 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.904486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.904545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.016845 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.016903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.017204 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.065535 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.291095 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" path="/var/lib/kubelet/pods/82b76227-c8f4-45e3-a632-0681deb43d58/volumes"
Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.342352 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.349102 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" event={"ID":"af8da55a-65a7-46c1-9af1-545ef9cc95bf","Type":"ContainerStarted","Data":"b2824880faea376d2179d0441ab4ac002a31e2603381c7480f1f7942b463f64f"}
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.689056 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxv8d"]
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.692715 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.702998 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"]
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.860590 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.861199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.861543 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963788 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963996 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.964184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.964227 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.991645 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.015771 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.370219 4829 generic.go:334] "Generic (PLEG): container finished" podID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerID="ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321" exitCode=1
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.370450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" event={"ID":"af8da55a-65a7-46c1-9af1-545ef9cc95bf","Type":"ContainerDied","Data":"ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321"}
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.450624 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"]
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.471815 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"]
Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.807279 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"]
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.385983 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919"}
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.386303 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01"}
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.602245 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.729247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") "
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.729467 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") "
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.730798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host" (OuterVolumeSpecName: "host") pod "af8da55a-65a7-46c1-9af1-545ef9cc95bf" (UID: "af8da55a-65a7-46c1-9af1-545ef9cc95bf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.749248 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv" (OuterVolumeSpecName: "kube-api-access-96lkv") pod "af8da55a-65a7-46c1-9af1-545ef9cc95bf" (UID: "af8da55a-65a7-46c1-9af1-545ef9cc95bf"). InnerVolumeSpecName "kube-api-access-96lkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.832205 4829 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") on node \"crc\" DevicePath \"\""
Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.832244 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") on node \"crc\" DevicePath \"\""
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.322442 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" path="/var/lib/kubelet/pods/af8da55a-65a7-46c1-9af1-545ef9cc95bf/volumes"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.324012 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"]
Feb 17 17:24:08 crc kubenswrapper[4829]: E0217 17:24:08.324356 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.324371 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.346745 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.348735 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.353795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"]
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.432717 4829 scope.go:117] "RemoveContainer" containerID="ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.432763 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.437395 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerID="ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919" exitCode=0
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.437435 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919"}
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.450430 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.450539 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.451977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.557565 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.557957 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.558343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.559363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.561982 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.577953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.696857 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:08 crc kubenswrapper[4829]: E0217 17:24:08.725908 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8da55a_65a7_46c1_9af1_545ef9cc95bf.slice\": RecentStats: unable to find data in memory cache]"
Feb 17 17:24:09 crc kubenswrapper[4829]: I0217 17:24:09.312732 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"]
Feb 17 17:24:09 crc kubenswrapper[4829]: W0217 17:24:09.328605 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b16d00f_7aac_42b2_ba34_9cf5cffbfddc.slice/crio-f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d WatchSource:0}: Error finding container f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d: Status 404 returned error can't find the container with id f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d
Feb 17 17:24:09 crc kubenswrapper[4829]: I0217 17:24:09.453636 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d"}
Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.471275 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2" exitCode=0
Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.471389 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2"}
Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.473755 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.477933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0"}
Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.489068 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51"}
Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.492642 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerID="bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0" exitCode=0
Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.492693 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0"}
Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.404823 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.405171 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.405311 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.406529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:24:12 crc kubenswrapper[4829]: I0217 17:24:12.507255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb"}
Feb 17 17:24:12 crc kubenswrapper[4829]: I0217 17:24:12.541189 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxv8d" podStartSLOduration=4.088616378 podStartE2EDuration="7.541171559s" podCreationTimestamp="2026-02-17 17:24:05 +0000 UTC" firstStartedPulling="2026-02-17 17:24:08.484027175 +0000 UTC m=+5360.901045153" lastFinishedPulling="2026-02-17 17:24:11.936582356 +0000 UTC m=+5364.353600334" observedRunningTime="2026-02-17 17:24:12.527806805 +0000 UTC m=+5364.944824793" watchObservedRunningTime="2026-02-17 17:24:12.541171559 +0000 UTC m=+5364.958189537"
Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.017496 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.018058 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.398900 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.399231 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.399394 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.401074 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.507116 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:19 crc kubenswrapper[4829]: I0217 17:24:19.593095 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51" exitCode=0
Feb 17 17:24:19 crc kubenswrapper[4829]: I0217 17:24:19.593405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51"}
Feb 17 17:24:20 crc kubenswrapper[4829]: I0217 17:24:20.605338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2"}
Feb 17 17:24:20 crc kubenswrapper[4829]: I0217 17:24:20.631637 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-msh7b" podStartSLOduration=3.130686858 podStartE2EDuration="12.631616955s" podCreationTimestamp="2026-02-17 17:24:08 +0000 UTC" firstStartedPulling="2026-02-17 17:24:10.473453879 +0000 UTC m=+5362.890471857" lastFinishedPulling="2026-02-17 17:24:19.974383976 +0000 UTC m=+5372.391401954" observedRunningTime="2026-02-17 17:24:20.621371956 +0000 UTC m=+5373.038389934" watchObservedRunningTime="2026-02-17 17:24:20.631616955 +0000 UTC m=+5373.048634933"
Feb 17 17:24:24 crc kubenswrapper[4829]: E0217 17:24:24.283935 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.072889 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.128519 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"]
Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.678465 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxv8d" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" containerID="cri-o://87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" gracePeriod=2
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.715800 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerID="87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" exitCode=0
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.715883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb"}
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.716053 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01"}
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.716071 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01"
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.787778 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d"
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.844841 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") "
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.845166 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") "
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.845292 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") "
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.850003 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities" (OuterVolumeSpecName: "utilities") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.852846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf" (OuterVolumeSpecName: "kube-api-access-phglf") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "kube-api-access-phglf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.908524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948248 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948310 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948325 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") on node \"crc\" DevicePath \"\""
Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.697861 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.698235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-msh7b"
Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.726095 4829 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.757044 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.767476 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:29 crc kubenswrapper[4829]: E0217 17:24:29.281326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:29 crc kubenswrapper[4829]: I0217 17:24:29.763180 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:29 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:29 crc kubenswrapper[4829]: > Feb 17 17:24:30 crc kubenswrapper[4829]: I0217 17:24:30.294671 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" path="/var/lib/kubelet/pods/eeb860ed-6cd7-4618-8ea7-158f7e3251d8/volumes" Feb 17 17:24:36 crc kubenswrapper[4829]: E0217 17:24:36.281823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:39 crc kubenswrapper[4829]: I0217 17:24:39.754117 4829 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:39 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:39 crc kubenswrapper[4829]: > Feb 17 17:24:40 crc kubenswrapper[4829]: E0217 17:24:40.285040 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:49 crc kubenswrapper[4829]: E0217 17:24:49.290949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:49 crc kubenswrapper[4829]: I0217 17:24:49.745357 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:49 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:49 crc kubenswrapper[4829]: > Feb 17 17:24:55 crc kubenswrapper[4829]: E0217 17:24:55.282204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 
17:24:58.756262 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 17:24:58.811765 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 17:24:58.996928 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:00 crc kubenswrapper[4829]: I0217 17:25:00.038527 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" containerID="cri-o://c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" gracePeriod=2 Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.050190 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" exitCode=0 Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.050270 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2"} Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.276223 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:25:01 crc kubenswrapper[4829]: E0217 17:25:01.281140 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.389847 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390044 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390148 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390803 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities" (OuterVolumeSpecName: "utilities") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.391436 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.397913 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp" (OuterVolumeSpecName: "kube-api-access-qd7vp") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "kube-api-access-qd7vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.493797 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.521611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.595731 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.062989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d"} Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.063072 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.063302 4829 scope.go:117] "RemoveContainer" containerID="c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.100805 4829 scope.go:117] "RemoveContainer" containerID="3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.108059 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.118660 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.148809 4829 scope.go:117] "RemoveContainer" containerID="f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.295266 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" path="/var/lib/kubelet/pods/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc/volumes" Feb 17 17:25:06 crc 
kubenswrapper[4829]: E0217 17:25:06.282309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:12 crc kubenswrapper[4829]: E0217 17:25:12.282345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:17 crc kubenswrapper[4829]: E0217 17:25:17.284200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.138563 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-api/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.425872 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-evaluator/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.507287 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-listener/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.584971 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-notifier/0.log" 
Feb 17 17:25:24 crc kubenswrapper[4829]: E0217 17:25:24.286044 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.488752 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-744588c6bd-fsx8x_652438ae-668e-4017-a88c-c6737fd0db78/barbican-api/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.506913 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-744588c6bd-fsx8x_652438ae-668e-4017-a88c-c6737fd0db78/barbican-api-log/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.685185 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b9b6dfd6-gq6hn_5f483139-9fb6-4db6-8c40-846d8bd69556/barbican-keystone-listener/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.752769 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b9b6dfd6-gq6hn_5f483139-9fb6-4db6-8c40-846d8bd69556/barbican-keystone-listener-log/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.833386 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765797c7c9-2cts6_87043d23-60bf-443c-8db4-2679d7269f6c/barbican-worker/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.907610 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765797c7c9-2cts6_87043d23-60bf-443c-8db4-2679d7269f6c/barbican-worker-log/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.097072 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj_9f00333b-9c18-4a8c-b409-2961da9afccc/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.293360 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/proxy-httpd/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.328932 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/ceilometer-notification-agent/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.435508 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/sg-core/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.522203 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_816bca39-deec-496c-bb97-40d4ad4ca878/cinder-api-log/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.608449 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_816bca39-deec-496c-bb97-40d4ad4ca878/cinder-api/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.733991 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0feacb21-5300-40f2-bee7-fac4613c2977/cinder-scheduler/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.833814 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0feacb21-5300-40f2-bee7-fac4613c2977/probe/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.502015 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/init/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.726878 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/init/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.815029 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/dnsmasq-dns/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.899059 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bp7df_30690071-6fc2-4647-82c0-6e5234005aec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.087228 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q_60a577ad-f610-459b-9f2d-19c6bc6f356a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.170609 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5_9a6550f4-cdf2-4365-8ce4-96642f12822f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.346194 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pwplj_5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.610997 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw_70fdafba-a123-4ccf-bcde-f3027dcbbf1b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.755544 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-v8r24_6a1c73d0-1366-47dc-9726-b2a5d6ed3b86/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.922722 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt_c0fd9f61-596b-4ef3-b6da-6ebe6b04d497/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.004850 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_417e614d-4be6-439c-9fbc-65e970d1614f/glance-httpd/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.037526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_417e614d-4be6-439c-9fbc-65e970d1614f/glance-log/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.207051 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4708c572-1818-4307-8667-0e2cb60f5635/glance-log/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.218434 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4708c572-1818-4307-8667-0e2cb60f5635/glance-httpd/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.802283 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7bf669c95c-g7msn_be43e34b-d8ec-44cd-bc26-e0ce3c9797a7/heat-api/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.984519 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7db87d5bbf-dtdjh_59de3866-adfb-4a8d-87f2-b54af38332d0/heat-engine/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.061112 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-66bc7b8984-mg8sc_5dfe4b1a-5f10-47f3-ab81-0807c468fab0/heat-cfnapi/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.175487 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-868ff7b66c-lx7qv_c2a8da85-ca3d-4368-8a34-4db948e7f6f3/keystone-api/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.247818 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-jp96w_7522621b-701f-4bef-8232-25fb5b8abab1/keystone-cron/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.311481 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f57285ef-f362-4fb7-8f6c-633698507b3d/kube-state-metrics/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.543525 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e39a0dce-4da5-4ff4-9e50-e2dc41d22092/mysqld-exporter/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.828159 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5598cc6dcc-p2b29_298e03dd-93bc-4a68-8589-ecec2278efd5/neutron-api/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.875138 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5598cc6dcc-p2b29_298e03dd-93bc-4a68-8589-ecec2278efd5/neutron-httpd/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.387601 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_62d7182c-e529-468f-8022-9fd5fc66b554/nova-api-log/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.401332 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8f709715-5e80-4988-8eb5-8bebcd673c47/nova-cell0-conductor-conductor/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.532094 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_62d7182c-e529-468f-8022-9fd5fc66b554/nova-api-api/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.919507 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_abe67602-ae51-43a0-b450-af654c573d9a/nova-cell1-conductor-conductor/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.012728 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fa5f0bda-7dee-4ea8-9b6c-ec30ce341044/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.131747 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e0afa824-7a82-41cc-9274-28689e2f3f57/nova-metadata-log/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: E0217 17:25:31.281674 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.497352 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_37d63bbb-2d26-4b85-8241-2785a5194a21/nova-scheduler-scheduler/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.557716 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/mysql-bootstrap/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.806552 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/mysql-bootstrap/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.863447 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/galera/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.044286 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/mysql-bootstrap/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.293805 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/mysql-bootstrap/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.364083 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/galera/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.521802 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4561ce68-ba71-42ad-95ec-de8b705a06ef/openstackclient/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.652694 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-75gff_e5adca8d-ac72-45d0-aa1c-3c453a78620e/ovn-controller/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.887596 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2hx8h_60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088/openstack-network-exporter/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.127242 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server-init/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.199736 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e0afa824-7a82-41cc-9274-28689e2f3f57/nova-metadata-metadata/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.345516 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server-init/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.350548 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.397434 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovs-vswitchd/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.777363 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_add70c30-2098-4686-bd7d-f693219a63b8/openstack-network-exporter/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.834718 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_add70c30-2098-4686-bd7d-f693219a63b8/ovn-northd/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.025915 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b04054b-6716-42c5-8e1b-d7eba2bcfe4c/openstack-network-exporter/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.041333 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b04054b-6716-42c5-8e1b-d7eba2bcfe4c/ovsdbserver-nb/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.167058 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2eeefec2-2e41-4278-8c9d-889dbf5f51ea/openstack-network-exporter/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.811146 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2eeefec2-2e41-4278-8c9d-889dbf5f51ea/ovsdbserver-sb/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.870827 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-6b8b56fc4d-7pnvr_504197ea-58c2-445f-96a1-4b812028425d/placement-api/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.885231 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8b56fc4d-7pnvr_504197ea-58c2-445f-96a1-4b812028425d/placement-log/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.104954 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/init-config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: E0217 17:25:35.281250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.374742 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/init-config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.384783 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/prometheus/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.394773 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/thanos-sidecar/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.405005 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.614008 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.892661 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.937415 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.963787 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/rabbitmq/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.188450 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/rabbitmq/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.190587 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.191089 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.960324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.987969 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/rabbitmq/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.006431 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/setup-container/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.341667 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/setup-container/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.365657 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vzzfp_fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.372977 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/rabbitmq/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.634030 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t_2b2909c1-2feb-4fa2-8a7e-e406334ade24/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.841172 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-84gsz_81b1a5c5-d463-48ba-b0d2-4409299812cb/swift-ring-rebalance/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.884379 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d69d97dcf-pdd69_cd5d005a-eb7a-4cbc-932f-2640cb8068eb/proxy-server/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.912729 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d69d97dcf-pdd69_cd5d005a-eb7a-4cbc-932f-2640cb8068eb/proxy-httpd/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.124287 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-reaper/0.log" Feb 17 17:25:38 crc 
kubenswrapper[4829]: I0217 17:25:38.154939 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.270994 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.322838 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-server/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.430274 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.492930 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.575938 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-server/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.681913 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-updater/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.773526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.780987 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-expirer/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.831766 4829 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.931288 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-server/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.017118 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/rsync/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.026577 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-updater/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.122776 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/swift-recon-cron/0.log" Feb 17 17:25:44 crc kubenswrapper[4829]: I0217 17:25:44.479759 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4e3198cb-0642-46be-a9e3-33db29446377/memcached/0.log" Feb 17 17:25:46 crc kubenswrapper[4829]: E0217 17:25:46.282731 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:49 crc kubenswrapper[4829]: E0217 17:25:49.282419 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:52 crc 
kubenswrapper[4829]: I0217 17:25:52.424669 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:25:52 crc kubenswrapper[4829]: I0217 17:25:52.425054 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:01 crc kubenswrapper[4829]: E0217 17:26:01.283621 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:04 crc kubenswrapper[4829]: E0217 17:26:04.282745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.638690 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.884735 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.911056 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.911291 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.109919 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.110458 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.161992 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/extract/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.614548 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-shssw_a711806b-ee8c-4fb8-b5da-da5e90ef06c6/manager/0.log" Feb 17 17:26:13 crc kubenswrapper[4829]: I0217 17:26:13.046032 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7j8p7_bb32d7a2-68ff-4511-a04f-fa09657791db/manager/0.log" Feb 17 17:26:13 crc 
kubenswrapper[4829]: I0217 17:26:13.484437 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9md4j_dd52262f-900a-4801-8c4c-f79787b6b715/manager/0.log" Feb 17 17:26:13 crc kubenswrapper[4829]: I0217 17:26:13.583805 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-hmtfv_84a22a6b-1fb5-4959-9342-0bcc4b033b68/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.440316 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-t57qn_60ea5425-d352-4d97-bedf-f01d07c89949/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.491850 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-vxvp7_0e275e91-4b6e-419e-b076-a6e221f8a8ac/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.879380 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-nksk9_62cfcaa0-5c8a-4a67-95b7-83aa695a8640/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.157179 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-fw4gg_8642cada-3458-43cc-90aa-cf66a1cd6426/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.468371 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-gcxk7_5b6c89f9-2c4f-4bab-8d8b-cd746acb3426/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.478344 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-w97sk_f3add145-231f-4d7b-b9dd-115026b2a05e/manager/0.log" Feb 17 
17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.786298 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-m4df4_3aab9223-4e3f-4657-afc2-91d0e0948542/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.936749 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-czbvb_f083cb81-0369-46de-9562-406736ae7e2f/manager/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: E0217 17:26:16.289686 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:16 crc kubenswrapper[4829]: E0217 17:26:16.289728 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.311946 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx_a1ec01cb-62ae-4855-b830-69f896bfb5a4/manager/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.805289 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-64549bfd8b-ksr2v_f5adeb4d-89fb-480c-a429-7cf978198db2/operator/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.993285 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-6p47w_24ddb2b4-4194-4df5-8820-9ea9c405abc7/registry-server/0.log" Feb 17 17:26:17 crc kubenswrapper[4829]: I0217 17:26:17.356009 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-mnrxb_72028d3b-7fd0-4b17-b0c2-c92bc7134637/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.357382 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-274tg_958dea67-d633-4f5c-a18e-2aca1a55020c/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.588534 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fht2z_eaf75815-7964-4bc0-aeae-d3306764d7f4/operator/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.786821 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-546d579865-h84k8_aa745829-0443-47a5-8c10-701bd4645505/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.872446 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-thspt_4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.356705 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zbs8b_23c03a71-fe86-47ad-ae4b-dd49bc07f2b0/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.626649 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-ndxcg_2237138f-4450-415b-9646-c2ab9f88194a/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.656773 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-2xmzw_5239a5a9-e318-4db3-8394-0427d57d4ae5/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.757712 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66fcc5ff49-8lb5d_584ed73b-c202-4d41-b884-cd9c279b3c0d/manager/0.log" Feb 17 17:26:22 crc kubenswrapper[4829]: I0217 17:26:22.424059 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:22 crc kubenswrapper[4829]: I0217 17:26:22.425636 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:26 crc kubenswrapper[4829]: I0217 17:26:26.125365 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-dlskg_6084260e-35c2-43b5-9606-98e1e0463e98/manager/0.log" Feb 17 17:26:31 crc kubenswrapper[4829]: E0217 17:26:31.282250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:31 crc kubenswrapper[4829]: E0217 17:26:31.282369 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:44 crc kubenswrapper[4829]: E0217 17:26:44.282655 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:46 crc kubenswrapper[4829]: E0217 17:26:46.282680 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.292311 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sqmls_2bfb2da7-1a85-42f9-8c3f-c7997e85dd58/control-plane-machine-set-operator/0.log" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.408151 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-47kpc_e8a98667-8884-4056-8577-3e7db8762ff9/kube-rbac-proxy/0.log" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.521382 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-47kpc_e8a98667-8884-4056-8577-3e7db8762ff9/machine-api-operator/0.log" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.424625 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.425241 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.425300 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.426309 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.426380 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" gracePeriod=600 Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.401941 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" exitCode=0 Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402066 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402875 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:26:58 crc kubenswrapper[4829]: E0217 17:26:58.291758 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:00 crc kubenswrapper[4829]: E0217 17:27:00.280842 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.183840 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mf5jl_476f8c4d-b180-40c8-b5a7-120565b0789f/cert-manager-controller/0.log" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.369704 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-29pr5_90365502-e574-4c31-b97b-ca69aac75648/cert-manager-cainjector/0.log" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.434817 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rzvp5_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad/cert-manager-webhook/0.log" Feb 17 17:27:12 crc kubenswrapper[4829]: E0217 17:27:12.282058 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:15 crc kubenswrapper[4829]: E0217 17:27:15.282591 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.203564 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-mchvp_df7e3d75-f36c-4258-ae86-6bb72db7c0e4/nmstate-console-plugin/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.374938 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-47lp4_4e62a7c0-ac99-4dd8-a587-58c98adb3a25/nmstate-handler/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.467969 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-85cbd_20b39811-2839-4b55-a69e-a293416edb22/kube-rbac-proxy/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.541312 4829 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-85cbd_20b39811-2839-4b55-a69e-a293416edb22/nmstate-metrics/0.log" Feb 17 17:27:22 crc kubenswrapper[4829]: I0217 17:27:22.419176 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-v2bww_55a7b0a0-24f0-4b6b-82bf-f131f831af3a/nmstate-webhook/0.log" Feb 17 17:27:22 crc kubenswrapper[4829]: I0217 17:27:22.444077 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-lpfx5_e597d80c-fb6d-45a3-9b01-4a32a59f07a6/nmstate-operator/0.log" Feb 17 17:27:24 crc kubenswrapper[4829]: E0217 17:27:24.281014 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:29 crc kubenswrapper[4829]: E0217 17:27:29.283037 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:38 crc kubenswrapper[4829]: I0217 17:27:38.145912 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/kube-rbac-proxy/0.log" Feb 17 17:27:38 crc kubenswrapper[4829]: I0217 17:27:38.175173 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/manager/0.log" Feb 17 17:27:39 crc kubenswrapper[4829]: E0217 
17:27:39.282636 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:44 crc kubenswrapper[4829]: E0217 17:27:44.281301 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:53 crc kubenswrapper[4829]: E0217 17:27:53.281755 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.122166 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwcb6_edb49e50-f230-48c5-b2e5-fe59a3ae73fa/prometheus-operator/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.275454 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-6q6r7_54e12496-0dd9-43a5-accb-e17546b7b715/prometheus-operator-admission-webhook/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.375288 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-vsf4q_a3ae1cd0-485d-4d83-8601-79d0c99bf9e8/prometheus-operator-admission-webhook/0.log" Feb 17 17:27:55 crc 
kubenswrapper[4829]: I0217 17:27:55.516282 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9xj96_9d3431d3-b6f2-4658-b45c-c428b77e98df/operator/0.log"
Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.577066 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vtctx_54f57142-2ddb-4c2f-a68e-ab77ff965e8c/observability-ui-dashboards/0.log"
Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.734196 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f6t4s_dd120281-015e-45a4-b1ae-f868b2326499/perses-operator/0.log"
Feb 17 17:27:57 crc kubenswrapper[4829]: E0217 17:27:57.281543 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:28:07 crc kubenswrapper[4829]: E0217 17:28:07.281204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:28:09 crc kubenswrapper[4829]: E0217 17:28:09.282386 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:28:13 crc kubenswrapper[4829]: I0217 17:28:13.851319 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-csdvg_54232488-a26b-4bdf-8b89-381241b92b54/cluster-logging-operator/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.049065 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_c7dd4bfd-add5-4b6b-a938-5e8ae8433d10/loki-compactor/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.057506 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-j7l9k_768f24d9-7e75-4b78-a2a7-10cdfd579577/collector/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.232005 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-knrkx_3e78e45a-c46f-4cfd-a487-56fad3cb0649/loki-distributor/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.261430 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-6lhvz_52de54a3-9f80-412c-a925-25541914e2b0/gateway/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.375013 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-6lhvz_52de54a3-9f80-412c-a925-25541914e2b0/opa/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.453158 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-8xxq9_38a2308f-5d3c-4dac-b105-3d42a6b7bdd1/gateway/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.480768 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-8xxq9_38a2308f-5d3c-4dac-b105-3d42a6b7bdd1/opa/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.625202 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7bf847ac-1d33-4bad-8882-4661d8f33da8/loki-index-gateway/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.773267 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_a7c5b31c-f45c-4a04-afc1-251ef93e471a/loki-ingester/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.838142 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-w7bl4_76340faf-b2e5-461e-9172-a03eee715830/loki-querier/0.log"
Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.996876 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-7v4zj_90856a62-8a7f-479c-af7e-a95b8292618a/loki-query-frontend/0.log"
Feb 17 17:28:20 crc kubenswrapper[4829]: E0217 17:28:20.284666 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:28:21 crc kubenswrapper[4829]: E0217 17:28:21.281478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:28:32 crc kubenswrapper[4829]: I0217 17:28:32.600076 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-g4znl_1da62b69-54b6-4041-885f-acda828405c9/kube-rbac-proxy/0.log"
Feb 17 17:28:32 crc kubenswrapper[4829]: I0217 17:28:32.791449 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-g4znl_1da62b69-54b6-4041-885f-acda828405c9/controller/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.324212 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.508100 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.525094 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.551742 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.620716 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.819236 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.831713 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.841133 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log"
Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.851292 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.083652 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/controller/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.092293 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.092785 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.100728 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: E0217 17:28:34.280838 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:28:34 crc kubenswrapper[4829]: E0217 17:28:34.282757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.360917 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/frr-metrics/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.361810 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/kube-rbac-proxy/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.387686 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/kube-rbac-proxy-frr/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.597498 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/reloader/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.638774 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-l8gzk_8ddfc374-12f8-443a-bcc1-526613e031bf/frr-k8s-webhook-server/0.log"
Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.857987 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848c6d5b-p864p_c5cf20c6-9fae-4c85-9c16-53e313c04cda/manager/0.log"
Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.073158 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6bd8598c46-74wvs_90b368e2-73a9-4594-8428-e17a7bb1e499/webhook-server/0.log"
Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.231895 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gr6k_a25680cc-e984-4ad7-95e2-3fe561a5fa8c/kube-rbac-proxy/0.log"
Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.933108 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gr6k_a25680cc-e984-4ad7-95e2-3fe561a5fa8c/speaker/0.log"
Feb 17 17:28:36 crc kubenswrapper[4829]: I0217 17:28:36.098706 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/frr/0.log"
Feb 17 17:28:45 crc kubenswrapper[4829]: E0217 17:28:45.298709 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:28:48 crc kubenswrapper[4829]: E0217 17:28:48.290113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:28:49 crc kubenswrapper[4829]: I0217 17:28:49.942699 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.176860 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.190200 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.203213 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.389616 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/extract/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.404207 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.404299 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.581270 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.761006 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.797866 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log"
Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.804991 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.025799 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/extract/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.054276 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.067674 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.231140 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.400909 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.447170 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.455409 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.646489 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/extract/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.662172 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.678272 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log"
Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.866106 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.083519 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.087194 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.090405 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.328526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.331452 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.424167 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.424236 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.657077 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.880132 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log"
Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.944972 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log"
Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.086217 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log"
Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.235873 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/registry-server/0.log"
Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.271536 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log"
Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.294774 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.168590 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.326985 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.415566 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.449637 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.723632 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/extract/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.774937 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.819049 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.826301 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/registry-server/0.log"
Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.925240 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.187698 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.187847 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.217525 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.943832 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.948200 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log"
Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.994823 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/extract/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.066113 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dk6vq_1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9/marketplace-operator/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.149405 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.335403 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.341122 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.369839 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.629997 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.637372 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.669368 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.877835 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/registry-server/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.926995 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.943059 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log"
Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.989130 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log"
Feb 17 17:28:57 crc kubenswrapper[4829]: I0217 17:28:57.179702 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log"
Feb 17 17:28:57 crc kubenswrapper[4829]: I0217 17:28:57.220910 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log"
Feb 17 17:28:58 crc kubenswrapper[4829]: I0217 17:28:57.999803 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/registry-server/0.log"
Feb 17 17:29:00 crc kubenswrapper[4829]: E0217 17:29:00.282114 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:29:01 crc kubenswrapper[4829]: E0217 17:29:01.295446 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:29:12 crc kubenswrapper[4829]: E0217 17:29:12.283885 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.283254 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.395765 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.396082 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.396208 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.397399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.571949 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwcb6_edb49e50-f230-48c5-b2e5-fe59a3ae73fa/prometheus-operator/0.log"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.613686 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-6q6r7_54e12496-0dd9-43a5-accb-e17546b7b715/prometheus-operator-admission-webhook/0.log"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.633259 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-vsf4q_a3ae1cd0-485d-4d83-8601-79d0c99bf9e8/prometheus-operator-admission-webhook/0.log"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.775254 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9xj96_9d3431d3-b6f2-4658-b45c-c428b77e98df/operator/0.log"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.878224 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f6t4s_dd120281-015e-45a4-b1ae-f868b2326499/perses-operator/0.log"
Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.885324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vtctx_54f57142-2ddb-4c2f-a68e-ab77ff965e8c/observability-ui-dashboards/0.log"
Feb 17 17:29:22 crc kubenswrapper[4829]: I0217 17:29:22.425499 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:29:22 crc kubenswrapper[4829]: I0217 17:29:22.426206 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.414822 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.415287 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.415419 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.416607 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:29 crc kubenswrapper[4829]: E0217 17:29:29.283379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:31 crc kubenswrapper[4829]: I0217 17:29:31.969675 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/kube-rbac-proxy/0.log" Feb 17 17:29:32 crc kubenswrapper[4829]: I0217 17:29:32.009229 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/manager/0.log" Feb 17 17:29:39 crc kubenswrapper[4829]: E0217 17:29:39.283926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:42 crc kubenswrapper[4829]: E0217 17:29:42.282151 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427140 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427881 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.428985 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.429053 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" gracePeriod=600 Feb 17 17:29:52 crc kubenswrapper[4829]: E0217 17:29:52.591012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930481 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" exitCode=0 Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930563 4829 scope.go:117] "RemoveContainer" containerID="2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.931537 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:29:52 crc kubenswrapper[4829]: E0217 17:29:52.932189 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:29:54 crc kubenswrapper[4829]: E0217 17:29:54.282977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:57 crc 
kubenswrapper[4829]: E0217 17:29:57.281183 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.190428 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191577 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191669 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191676 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191704 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191710 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191723 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-utilities" Feb 17 17:30:00 crc 
kubenswrapper[4829]: I0217 17:30:00.191729 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191744 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191750 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191770 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191776 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.192049 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.192110 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.193124 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.198934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.199177 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.223602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251520 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251730 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.354843 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.355090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.355428 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.357172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.362216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.380498 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.532754 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:01 crc kubenswrapper[4829]: I0217 17:30:01.180144 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:01 crc kubenswrapper[4829]: W0217 17:30:01.197629 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7afba793_475b_494e_9c36_7e080ebc391b.slice/crio-aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a WatchSource:0}: Error finding container aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a: Status 404 returned error can't find the container with id aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a Feb 17 17:30:02 crc kubenswrapper[4829]: I0217 17:30:02.098838 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerStarted","Data":"0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3"} Feb 17 17:30:02 crc 
kubenswrapper[4829]: I0217 17:30:02.099202 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerStarted","Data":"aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a"} Feb 17 17:30:02 crc kubenswrapper[4829]: I0217 17:30:02.127288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" podStartSLOduration=2.127260952 podStartE2EDuration="2.127260952s" podCreationTimestamp="2026-02-17 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:30:02.126185993 +0000 UTC m=+5714.543203971" watchObservedRunningTime="2026-02-17 17:30:02.127260952 +0000 UTC m=+5714.544278930" Feb 17 17:30:03 crc kubenswrapper[4829]: I0217 17:30:03.111397 4829 generic.go:334] "Generic (PLEG): container finished" podID="7afba793-475b-494e-9c36-7e080ebc391b" containerID="0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3" exitCode=0 Feb 17 17:30:03 crc kubenswrapper[4829]: I0217 17:30:03.111787 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerDied","Data":"0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3"} Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.793637 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959691 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.960497 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume" (OuterVolumeSpecName: "config-volume") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.961106 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.968844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.969029 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls" (OuterVolumeSpecName: "kube-api-access-vs4ls") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "kube-api-access-vs4ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.064081 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.064127 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerDied","Data":"aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a"} Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136388 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136401 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.279928 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:05 crc kubenswrapper[4829]: E0217 17:30:05.280473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.952870 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.973318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 17:30:06 crc kubenswrapper[4829]: E0217 17:30:06.284080 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:06 crc kubenswrapper[4829]: I0217 17:30:06.292554 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" path="/var/lib/kubelet/pods/8ddee5a9-0539-4387-8a52-5a41ca147e35/volumes" Feb 17 17:30:09 crc kubenswrapper[4829]: E0217 17:30:09.283525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:17 crc kubenswrapper[4829]: E0217 17:30:17.282240 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.279567 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:20 crc kubenswrapper[4829]: E0217 17:30:20.280460 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.345755 4829 scope.go:117] "RemoveContainer" containerID="87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.368183 4829 scope.go:117] "RemoveContainer" containerID="ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.404491 4829 scope.go:117] "RemoveContainer" containerID="bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.456359 4829 scope.go:117] "RemoveContainer" 
containerID="829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.531482 4829 scope.go:117] "RemoveContainer" containerID="1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2" Feb 17 17:30:23 crc kubenswrapper[4829]: E0217 17:30:23.282104 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:31 crc kubenswrapper[4829]: E0217 17:30:31.283170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:35 crc kubenswrapper[4829]: I0217 17:30:35.279972 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:35 crc kubenswrapper[4829]: E0217 17:30:35.280691 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:38 crc kubenswrapper[4829]: E0217 17:30:38.289632 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:43 crc kubenswrapper[4829]: E0217 17:30:43.281092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:47 crc kubenswrapper[4829]: I0217 17:30:47.279190 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:47 crc kubenswrapper[4829]: E0217 17:30:47.280171 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:49 crc kubenswrapper[4829]: E0217 17:30:49.283263 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:58 crc kubenswrapper[4829]: E0217 17:30:58.293561 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:02 crc kubenswrapper[4829]: I0217 17:31:02.279679 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:02 crc kubenswrapper[4829]: E0217 17:31:02.280520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:04 crc kubenswrapper[4829]: E0217 17:31:04.280951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:10 crc kubenswrapper[4829]: E0217 17:31:10.287415 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:13 crc kubenswrapper[4829]: I0217 17:31:13.279674 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:13 crc kubenswrapper[4829]: E0217 17:31:13.280476 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:19 crc kubenswrapper[4829]: E0217 17:31:19.284863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:22 crc kubenswrapper[4829]: E0217 17:31:22.281611 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:27 crc kubenswrapper[4829]: I0217 17:31:27.279919 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:27 crc kubenswrapper[4829]: E0217 17:31:27.281022 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:32 crc kubenswrapper[4829]: E0217 17:31:32.281440 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:35 crc kubenswrapper[4829]: E0217 17:31:35.283529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.103833 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" exitCode=0 Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.103898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerDied","Data":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.105737 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:37 crc kubenswrapper[4829]: I0217 17:31:37.009644 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/gather/0.log" Feb 17 17:31:39 crc kubenswrapper[4829]: I0217 17:31:39.279813 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:39 crc kubenswrapper[4829]: E0217 17:31:39.280212 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.498487 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.499248 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bmblp/must-gather-bqwqp" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" containerID="cri-o://9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" gracePeriod=2 Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.513123 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:31:45 crc kubenswrapper[4829]: E0217 17:31:45.829237 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd6f0fc_6efb_4c69_8adc_11bfd6242c10.slice/crio-9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd6f0fc_6efb_4c69_8adc_11bfd6242c10.slice/crio-conmon-9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.005895 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/copy/0.log" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.007252 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.130156 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.130224 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.137969 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz" (OuterVolumeSpecName: "kube-api-access-c7bzz") pod "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" (UID: "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10"). InnerVolumeSpecName "kube-api-access-c7bzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228130 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/copy/0.log" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228778 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" exitCode=143 Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228874 4829 scope.go:117] "RemoveContainer" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.229006 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.233382 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") on node \"crc\" DevicePath \"\"" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.255312 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.314938 4829 scope.go:117] "RemoveContainer" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: E0217 17:31:46.315007 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:46 crc 
kubenswrapper[4829]: E0217 17:31:46.315466 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": container with ID starting with 9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5 not found: ID does not exist" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315537 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5"} err="failed to get container status \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": rpc error: code = NotFound desc = could not find container \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": container with ID starting with 9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5 not found: ID does not exist" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315567 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: E0217 17:31:46.315898 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": container with ID starting with 9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133 not found: ID does not exist" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315921 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} err="failed to get container status 
\"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": rpc error: code = NotFound desc = could not find container \"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": container with ID starting with 9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133 not found: ID does not exist" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.343490 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" (UID: "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.438065 4829 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:31:47 crc kubenswrapper[4829]: E0217 17:31:47.283038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:48 crc kubenswrapper[4829]: I0217 17:31:48.294812 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" path="/var/lib/kubelet/pods/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/volumes" Feb 17 17:31:52 crc kubenswrapper[4829]: I0217 17:31:52.280247 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:52 crc kubenswrapper[4829]: E0217 17:31:52.282369 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:00 crc kubenswrapper[4829]: E0217 17:32:00.282497 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:00 crc kubenswrapper[4829]: E0217 17:32:00.283511 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:05 crc kubenswrapper[4829]: I0217 17:32:05.280479 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:05 crc kubenswrapper[4829]: E0217 17:32:05.281367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:11 crc kubenswrapper[4829]: E0217 17:32:11.281730 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:12 crc kubenswrapper[4829]: E0217 17:32:12.282843 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:20 crc kubenswrapper[4829]: I0217 17:32:20.280342 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:20 crc kubenswrapper[4829]: E0217 17:32:20.281247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:22 crc kubenswrapper[4829]: E0217 17:32:22.281805 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:27 crc kubenswrapper[4829]: E0217 17:32:27.283929 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:33 crc kubenswrapper[4829]: I0217 17:32:33.280154 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:33 crc kubenswrapper[4829]: E0217 17:32:33.280949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:36 crc kubenswrapper[4829]: E0217 17:32:36.281844 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:41 crc kubenswrapper[4829]: E0217 17:32:41.282724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:45 crc kubenswrapper[4829]: I0217 17:32:45.280311 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:45 crc kubenswrapper[4829]: E0217 17:32:45.282904 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:47 crc kubenswrapper[4829]: E0217 17:32:47.281188 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:53 crc kubenswrapper[4829]: E0217 17:32:53.281737 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:59 crc kubenswrapper[4829]: E0217 17:32:59.281831 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:00 crc kubenswrapper[4829]: I0217 17:33:00.279380 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:00 crc kubenswrapper[4829]: E0217 17:33:00.280121 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:08 crc kubenswrapper[4829]: E0217 17:33:08.296246 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:10 crc kubenswrapper[4829]: E0217 17:33:10.281222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:15 crc kubenswrapper[4829]: I0217 17:33:15.280231 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:15 crc kubenswrapper[4829]: E0217 17:33:15.281166 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.797993 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801391 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801548 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801682 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801764 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801906 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801987 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802350 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802451 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802531 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.804243 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.821273 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.961633 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.962177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.962325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.064864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065071 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065118 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.091892 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.183120 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: E0217 17:33:22.288550 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.795380 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:22 crc kubenswrapper[4829]: W0217 17:33:22.797273 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80976c2_e7e3_4ad9_8eb9_6e14939fa5d0.slice/crio-c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0 WatchSource:0}: Error finding container c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0: Status 404 returned error can't find the container with id c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0 Feb 17 17:33:23 crc kubenswrapper[4829]: E0217 17:33:23.281258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355058 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" exitCode=0 Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355139 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428"} Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0"} Feb 17 17:33:24 crc kubenswrapper[4829]: I0217 17:33:24.368787 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.279713 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:27 crc kubenswrapper[4829]: E0217 17:33:27.280213 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.409084 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" exitCode=0 Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.409163 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" 
event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} Feb 17 17:33:28 crc kubenswrapper[4829]: I0217 17:33:28.427096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} Feb 17 17:33:28 crc kubenswrapper[4829]: I0217 17:33:28.453114 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxkqc" podStartSLOduration=2.960580532 podStartE2EDuration="7.453067135s" podCreationTimestamp="2026-02-17 17:33:21 +0000 UTC" firstStartedPulling="2026-02-17 17:33:23.35854669 +0000 UTC m=+5915.775564668" lastFinishedPulling="2026-02-17 17:33:27.851033293 +0000 UTC m=+5920.268051271" observedRunningTime="2026-02-17 17:33:28.445532432 +0000 UTC m=+5920.862550410" watchObservedRunningTime="2026-02-17 17:33:28.453067135 +0000 UTC m=+5920.870085113" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.183993 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.184509 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.235223 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:33 crc kubenswrapper[4829]: E0217 17:33:33.282719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:35 crc kubenswrapper[4829]: E0217 17:33:35.281155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:40 crc kubenswrapper[4829]: I0217 17:33:40.280037 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:40 crc kubenswrapper[4829]: E0217 17:33:40.281106 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.244825 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.298534 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.598768 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxkqc" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" containerID="cri-o://2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" gracePeriod=2 Feb 17 17:33:43 crc 
kubenswrapper[4829]: I0217 17:33:43.137045 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236358 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236541 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236815 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.238015 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities" (OuterVolumeSpecName: "utilities") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.243978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5" (OuterVolumeSpecName: "kube-api-access-cz2j5") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "kube-api-access-cz2j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.295785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340214 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340283 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340300 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613272 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" 
containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" exitCode=0 Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613324 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613350 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0"} Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613367 4829 scope.go:117] "RemoveContainer" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613366 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.648362 4829 scope.go:117] "RemoveContainer" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.664769 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.671726 4829 scope.go:117] "RemoveContainer" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.676051 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.745302 4829 scope.go:117] "RemoveContainer" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.745936 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": container with ID starting with 2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe not found: ID does not exist" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746113 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} err="failed to get container status \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": rpc error: code = NotFound desc = could not find container \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": container with ID starting with 2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe not 
found: ID does not exist" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746257 4829 scope.go:117] "RemoveContainer" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.746769 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": container with ID starting with 48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a not found: ID does not exist" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746802 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} err="failed to get container status \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": rpc error: code = NotFound desc = could not find container \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": container with ID starting with 48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a not found: ID does not exist" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746828 4829 scope.go:117] "RemoveContainer" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.747136 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": container with ID starting with ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428 not found: ID does not exist" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.747165 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428"} err="failed to get container status \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": rpc error: code = NotFound desc = could not find container \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": container with ID starting with ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428 not found: ID does not exist" Feb 17 17:33:44 crc kubenswrapper[4829]: I0217 17:33:44.300051 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" path="/var/lib/kubelet/pods/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0/volumes" Feb 17 17:33:46 crc kubenswrapper[4829]: E0217 17:33:46.285029 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:49 crc kubenswrapper[4829]: E0217 17:33:49.282867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:51 crc kubenswrapper[4829]: I0217 17:33:51.280214 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:51 crc kubenswrapper[4829]: E0217 17:33:51.281525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:59 crc kubenswrapper[4829]: E0217 17:33:59.284945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:03 crc kubenswrapper[4829]: I0217 17:34:03.279985 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:03 crc kubenswrapper[4829]: E0217 17:34:03.281890 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:04 crc kubenswrapper[4829]: E0217 17:34:04.282994 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:05 crc kubenswrapper[4829]: I0217 17:34:05.309143 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6d69d97dcf-pdd69" podUID="cd5d005a-eb7a-4cbc-932f-2640cb8068eb" containerName="proxy-server" probeResult="failure" 
output="HTTP probe failed with statuscode: 502" Feb 17 17:34:11 crc kubenswrapper[4829]: E0217 17:34:11.281675 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:17 crc kubenswrapper[4829]: I0217 17:34:17.282024 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:17 crc kubenswrapper[4829]: I0217 17:34:17.283796 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.285628 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.389894 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.389958 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.390106 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.391427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:26 crc kubenswrapper[4829]: E0217 17:34:26.282103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:31 crc kubenswrapper[4829]: I0217 17:34:31.282474 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:31 crc kubenswrapper[4829]: E0217 17:34:31.283258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:32 crc kubenswrapper[4829]: E0217 17:34:32.282491 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.000533 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001864 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-utilities" Feb 17 17:34:40 crc 
kubenswrapper[4829]: I0217 17:34:40.001884 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-utilities" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001906 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.001915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001945 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-content" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.001955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-content" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.002263 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.004540 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.044264 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065689 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168092 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168235 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168353 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.169267 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.169308 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.195675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.335341 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.417783 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.418135 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.418267 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.420207 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.969408 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.302993 4829 generic.go:334] "Generic (PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" exitCode=0 Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.303175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2"} Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.303230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"270179ade4b11a7d177cfee64fe4570654b2234b20cc90c73fa23cd98e67c217"} Feb 17 17:34:43 crc kubenswrapper[4829]: E0217 17:34:43.282070 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:43 crc kubenswrapper[4829]: I0217 17:34:43.351619 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} Feb 17 17:34:45 crc kubenswrapper[4829]: I0217 17:34:45.374079 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" exitCode=0 Feb 17 17:34:45 crc kubenswrapper[4829]: I0217 17:34:45.374173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.280095 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:46 crc kubenswrapper[4829]: E0217 17:34:46.280771 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.387103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.412935 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mlxs" podStartSLOduration=2.941111277 podStartE2EDuration="7.412916501s" podCreationTimestamp="2026-02-17 17:34:39 +0000 UTC" firstStartedPulling="2026-02-17 17:34:41.3061272 +0000 UTC m=+5993.723145178" lastFinishedPulling="2026-02-17 17:34:45.777932424 +0000 UTC m=+5998.194950402" observedRunningTime="2026-02-17 17:34:46.40251219 
+0000 UTC m=+5998.819530178" watchObservedRunningTime="2026-02-17 17:34:46.412916501 +0000 UTC m=+5998.829934479" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.335985 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.336512 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.392103 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:51 crc kubenswrapper[4829]: E0217 17:34:51.281912 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:56 crc kubenswrapper[4829]: E0217 17:34:56.283182 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.279683 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.395348 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.460373 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.576282 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mlxs" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" containerID="cri-o://3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" gracePeriod=2 Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.117561 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.315779 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.316248 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.316557 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.317909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities" (OuterVolumeSpecName: "utilities") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: 
"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.321786 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96" (OuterVolumeSpecName: "kube-api-access-k2s96") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "kube-api-access-k2s96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.366930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420426 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420476 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420487 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.588334 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"671f1cb3fbc562660eb7c1e1869f59b0a300c8fa64e35695004296799dbe493d"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590534 4829 generic.go:334] "Generic (PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" exitCode=0 Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590611 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"270179ade4b11a7d177cfee64fe4570654b2234b20cc90c73fa23cd98e67c217"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590630 4829 scope.go:117] "RemoveContainer" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590691 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.615629 4829 scope.go:117] "RemoveContainer" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.655915 4829 scope.go:117] "RemoveContainer" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.688834 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.709218 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.720426 4829 scope.go:117] "RemoveContainer" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.721564 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": container with ID starting with 3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044 not found: ID does not exist" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.721627 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} err="failed to get container status \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": rpc error: code = NotFound desc = could not find container \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": container with ID starting with 3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044 not 
found: ID does not exist" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.721652 4829 scope.go:117] "RemoveContainer" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.724643 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": container with ID starting with 98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77 not found: ID does not exist" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.724803 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} err="failed to get container status \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": rpc error: code = NotFound desc = could not find container \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": container with ID starting with 98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77 not found: ID does not exist" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.724944 4829 scope.go:117] "RemoveContainer" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.725772 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": container with ID starting with cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2 not found: ID does not exist" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.725805 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2"} err="failed to get container status \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": rpc error: code = NotFound desc = could not find container \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": container with ID starting with cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2 not found: ID does not exist" Feb 17 17:35:02 crc kubenswrapper[4829]: I0217 17:35:02.298663 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" path="/var/lib/kubelet/pods/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a/volumes" Feb 17 17:35:04 crc kubenswrapper[4829]: E0217 17:35:04.281618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:35:08 crc kubenswrapper[4829]: E0217 17:35:08.299930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:15 crc kubenswrapper[4829]: E0217 17:35:15.282937 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 
17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.829509 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830768 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-utilities" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830789 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-utilities" Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830832 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-content" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830841 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-content" Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830870 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830880 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.831163 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.833246 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.851523 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.964658 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965114 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965177 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965562 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.983526 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:18 crc kubenswrapper[4829]: I0217 17:35:18.159380 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:18 crc kubenswrapper[4829]: I0217 17:35:18.686481 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174716 4829 generic.go:334] "Generic (PLEG): container finished" podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5" exitCode=0 Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"} Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174959 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"b7f14e59773190d0d34da9bcb850d95b1c5c18a49c66d9e83683819501e4e491"} Feb 17 17:35:20 crc kubenswrapper[4829]: I0217 17:35:20.187131 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} Feb 17 17:35:20 crc kubenswrapper[4829]: E0217 17:35:20.281780 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:25 crc kubenswrapper[4829]: I0217 17:35:25.243522 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679" exitCode=0 Feb 17 17:35:25 crc kubenswrapper[4829]: I0217 17:35:25.243656 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} Feb 17 17:35:26 crc kubenswrapper[4829]: I0217 17:35:26.256675 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"} Feb 17 17:35:26 crc kubenswrapper[4829]: I0217 17:35:26.283610 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqswj" podStartSLOduration=2.729500739 podStartE2EDuration="9.283590546s" podCreationTimestamp="2026-02-17 17:35:17 +0000 UTC" firstStartedPulling="2026-02-17 17:35:19.177431433 +0000 UTC m=+6031.594449411" lastFinishedPulling="2026-02-17 17:35:25.73152124 +0000 UTC m=+6038.148539218" observedRunningTime="2026-02-17 17:35:26.274227713 +0000 UTC m=+6038.691245701" watchObservedRunningTime="2026-02-17 17:35:26.283590546 +0000 UTC m=+6038.700608534" Feb 17 17:35:28 crc kubenswrapper[4829]: I0217 17:35:28.160121 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:28 crc kubenswrapper[4829]: I0217 17:35:28.160464 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:29 crc kubenswrapper[4829]: I0217 17:35:29.223196 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqswj" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" 
containerName="registry-server" probeResult="failure" output=< Feb 17 17:35:29 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:35:29 crc kubenswrapper[4829]: > Feb 17 17:35:30 crc kubenswrapper[4829]: E0217 17:35:30.283087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:35:35 crc kubenswrapper[4829]: E0217 17:35:35.282456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.219354 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.274906 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.463057 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:39 crc kubenswrapper[4829]: I0217 17:35:39.394763 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqswj" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerName="registry-server" containerID="cri-o://1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" gracePeriod=2 Feb 17 17:35:39 crc kubenswrapper[4829]: I0217 17:35:39.986398 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.121612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.122029 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.122068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.126560 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities" (OuterVolumeSpecName: "utilities") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.146840 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld" (OuterVolumeSpecName: "kube-api-access-8lxld") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). 
InnerVolumeSpecName "kube-api-access-8lxld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.229753 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.230006 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.295787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.331932 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407202 4829 generic.go:334] "Generic (PLEG): container finished" podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" exitCode=0 Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"} Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"b7f14e59773190d0d34da9bcb850d95b1c5c18a49c66d9e83683819501e4e491"} Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407319 4829 scope.go:117] "RemoveContainer" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407304 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.452767 4829 scope.go:117] "RemoveContainer" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.479357 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.497057 4829 scope.go:117] "RemoveContainer" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.500419 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.566172 4829 scope.go:117] "RemoveContainer" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.567064 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": container with ID starting with 1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409 not found: ID does not exist" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567132 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"} err="failed to get container status \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": rpc error: code = NotFound desc = could not find container \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": container with ID starting with 1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409 not found: ID does 
not exist" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567204 4829 scope.go:117] "RemoveContainer" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679" Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.567726 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": container with ID starting with 8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679 not found: ID does not exist" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567800 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} err="failed to get container status \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": rpc error: code = NotFound desc = could not find container \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": container with ID starting with 8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679 not found: ID does not exist" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567863 4829 scope.go:117] "RemoveContainer" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5" Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.568146 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": container with ID starting with fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5 not found: ID does not exist" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.568195 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"} err="failed to get container status \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": rpc error: code = NotFound desc = could not find container \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": container with ID starting with fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5 not found: ID does not exist" Feb 17 17:35:42 crc kubenswrapper[4829]: I0217 17:35:42.292607 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" path="/var/lib/kubelet/pods/a485b000-0c0b-48e7-9286-f8e155eb02cf/volumes" Feb 17 17:35:44 crc kubenswrapper[4829]: E0217 17:35:44.282910 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:35:47 crc kubenswrapper[4829]: E0217 17:35:47.282353 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:58 crc kubenswrapper[4829]: E0217 17:35:58.290232 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:59 crc kubenswrapper[4829]: E0217 
17:35:59.281867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:36:10 crc kubenswrapper[4829]: E0217 17:36:10.287811 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:36:11 crc kubenswrapper[4829]: E0217 17:36:11.281597 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:36:23 crc kubenswrapper[4829]: E0217 17:36:23.282756 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:36:26 crc kubenswrapper[4829]: E0217 17:36:26.282468 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:36:38 crc 
kubenswrapper[4829]: E0217 17:36:38.291193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:36:40 crc kubenswrapper[4829]: E0217 17:36:40.283475 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:36:51 crc kubenswrapper[4829]: E0217 17:36:51.281429 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:36:53 crc kubenswrapper[4829]: E0217 17:36:53.281915 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:37:02 crc kubenswrapper[4829]: E0217 17:37:02.282187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 
17:37:08 crc kubenswrapper[4829]: E0217 17:37:08.318371 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:37:14 crc kubenswrapper[4829]: E0217 17:37:14.283554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:37:22 crc kubenswrapper[4829]: I0217 17:37:22.424874 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:37:22 crc kubenswrapper[4829]: I0217 17:37:22.425565 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:37:23 crc kubenswrapper[4829]: E0217 17:37:23.281461 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:37:25 crc kubenswrapper[4829]: 
E0217 17:37:25.281901 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:37:35 crc kubenswrapper[4829]: E0217 17:37:35.282179 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:37:40 crc kubenswrapper[4829]: E0217 17:37:40.282641 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:37:46 crc kubenswrapper[4829]: E0217 17:37:46.283275 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:37:52 crc kubenswrapper[4829]: I0217 17:37:52.425086 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:37:52 crc kubenswrapper[4829]: I0217 17:37:52.425675 4829 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"